Test Report: Docker_Linux_crio 22343

72a35eba785b899784aeadb9114946ce54d68eef:2025-12-27:43008

Test failures (26/332)

TestAddons/serial/Volcano (0.24s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:852: skipping: crio not supported
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-102660 addons disable volcano --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-102660 addons disable volcano --alsologtostderr -v=1: exit status 11 (237.576549ms)

-- stdout --

-- /stdout --
** stderr **
	I1227 09:07:23.070411  386460 out.go:360] Setting OutFile to fd 1 ...
	I1227 09:07:23.070510  386460 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:07:23.070518  386460 out.go:374] Setting ErrFile to fd 2...
	I1227 09:07:23.070522  386460 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:07:23.070723  386460 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22343-373581/.minikube/bin
	I1227 09:07:23.070996  386460 mustload.go:66] Loading cluster: addons-102660
	I1227 09:07:23.071279  386460 config.go:182] Loaded profile config "addons-102660": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:07:23.071297  386460 addons.go:622] checking whether the cluster is paused
	I1227 09:07:23.071368  386460 config.go:182] Loaded profile config "addons-102660": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:07:23.071381  386460 host.go:66] Checking if "addons-102660" exists ...
	I1227 09:07:23.071755  386460 cli_runner.go:164] Run: docker container inspect addons-102660 --format={{.State.Status}}
	I1227 09:07:23.090498  386460 ssh_runner.go:195] Run: systemctl --version
	I1227 09:07:23.090551  386460 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-102660
	I1227 09:07:23.108434  386460 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/addons-102660/id_rsa Username:docker}
	I1227 09:07:23.197978  386460 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 09:07:23.198055  386460 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 09:07:23.226206  386460 cri.go:96] found id: "61cbe8e837adc9e12c09da01b60c56b3939ef24c3edb0fb1a1ff7a1e5217bef5"
	I1227 09:07:23.226226  386460 cri.go:96] found id: "5ce019c3bd795b1d8c52d88aa5efda3f2b2a4e70fe66d66a718ac89d8d94459b"
	I1227 09:07:23.226230  386460 cri.go:96] found id: "bc00ef2c406878efd24b7441ce9684489bac149b8ea06437701d881d8fd1134a"
	I1227 09:07:23.226234  386460 cri.go:96] found id: "b84f056d766ce4f088ef4dd81df4a8a450aacef998043dacfd09c38d5e65a5de"
	I1227 09:07:23.226236  386460 cri.go:96] found id: "c811e10f8822a8d0fa521887326d0b7573c9ca6a2c6a83fbd04910dca60d1bb8"
	I1227 09:07:23.226239  386460 cri.go:96] found id: "a2283184858355bab4f1099219f6ece626fa0e2dc72cf70459da07857800fcd9"
	I1227 09:07:23.226242  386460 cri.go:96] found id: "94a18ba09d75af096aec09c9d6e99a28e229b47060eecfb746d369bcc0776811"
	I1227 09:07:23.226245  386460 cri.go:96] found id: "97eda9aefcf0da0558d7a06ddfc649858709d920ee98ddc5d849001e68c7dd53"
	I1227 09:07:23.226248  386460 cri.go:96] found id: "29b050443a0d0564e3cab361aa8cf5f193f04243d42827bc7903078cc1c63669"
	I1227 09:07:23.226254  386460 cri.go:96] found id: "163a0aaa6a06fd63eeafe4e94dd47e7deda5be49bc2cd72c50c63430dc3daffd"
	I1227 09:07:23.226257  386460 cri.go:96] found id: "8c32e03d7b7ea992d32ff6ed1bd6d76a455e62109935f568357619dc5d27c42d"
	I1227 09:07:23.226259  386460 cri.go:96] found id: "95be8942fae68d3719a732f35c19a653707b2b4233b03af95250f4910bf63bf0"
	I1227 09:07:23.226262  386460 cri.go:96] found id: "0100055c8dfbc5a2d1f43d01dbc4eec9a952e71f2fb34164ec64f5f1f8549569"
	I1227 09:07:23.226265  386460 cri.go:96] found id: "94c6915787e83d2b32d5174224a751c4f59d9a2bd7bba84fdc025220be0b5bf7"
	I1227 09:07:23.226268  386460 cri.go:96] found id: "e5e28883bcce170a2c5a972c277beb2812dfa28ff64d65b063802a2d4dd441b0"
	I1227 09:07:23.226273  386460 cri.go:96] found id: "f8332e2291843684ceb08144892b17749193498e0a119605cc9e26ca9611afa0"
	I1227 09:07:23.226275  386460 cri.go:96] found id: "a7360b2982e804662d1c021be1dfe58a9a695fe80b6c5d49bfb0e40b1ecfe587"
	I1227 09:07:23.226280  386460 cri.go:96] found id: "4901f7247dd76708997164babfe1fed20945e96ba91303ace203b6bfb566f849"
	I1227 09:07:23.226282  386460 cri.go:96] found id: "63d37d0b224df2dcac1ae61236ee850f35706ab817a808c84f20ad3e4cb2a302"
	I1227 09:07:23.226285  386460 cri.go:96] found id: "8dc04e3833a231fa5179c87f14246a81d854de5babe95333b3b15e87bcfcf039"
	I1227 09:07:23.226291  386460 cri.go:96] found id: "9e1905ef463d325bf6957345908cff92bced86fe3aa688b53f1bd2132d21eba5"
	I1227 09:07:23.226294  386460 cri.go:96] found id: "8c753ac1232bc2fd8a5f54e08381a58915380ecedec1d0a778df7de68269a65b"
	I1227 09:07:23.226299  386460 cri.go:96] found id: "3ea9a1cdacfc9634d7733f44b141e168572272cb1d2febf81605572e89c7bb0b"
	I1227 09:07:23.226302  386460 cri.go:96] found id: "16c4fbdade2e6b0e19d541ba379ce4ddd0e77b13d32e05b825055943a7d19bc4"
	I1227 09:07:23.226305  386460 cri.go:96] found id: ""
	I1227 09:07:23.226343  386460 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 09:07:23.239701  386460 out.go:203] 
	W1227 09:07:23.240716  386460 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T09:07:23Z" level=error msg="open /run/runc: no such file or directory"
	
	W1227 09:07:23.240731  386460 out.go:285] * 
	W1227 09:07:23.242455  386460 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1227 09:07:23.243405  386460 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable volcano addon: args "out/minikube-linux-amd64 -p addons-102660 addons disable volcano --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/serial/Volcano (0.24s)
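Every failure captured in this section shares a single signature: the addon's own functional checks pass, and the test then fails in the common disable helper (addons_test.go:1055) because `addons disable <addon>` exits 11. The stderr above shows why: the paused-cluster check lists the kube-system containers via crictl successfully, then runs `sudo runc list -f json`, which exits 1 with `open /run/runc: no such file or directory`, and that error aborts the command with MK_ADDON_DISABLE_PAUSED. The Go sketch below is a minimal reproduction of that failing probe, not minikube's actual implementation; runOnNode and checkPaused are hypothetical stand-ins for its ssh_runner and pause check.

package main

import (
	"fmt"
	"os/exec"
)

// runOnNode is a stand-in for minikube's ssh_runner: it executes a node
// command (here: locally, for illustration) and returns combined output.
func runOnNode(name string, args ...string) (string, error) {
	out, err := exec.Command(name, args...).CombinedOutput()
	return string(out), err
}

// checkPaused mirrors the probe seen in the logs: after crictl has listed
// the kube-system containers, `sudo runc list -f json` is run, and any
// non-zero exit (such as runc failing to open /run/runc) is propagated as
// an error instead of being treated as "no paused containers".
func checkPaused() error {
	out, err := runOnNode("sudo", "runc", "list", "-f", "json")
	if err != nil {
		return fmt.Errorf("check paused: list paused: runc: %w\n%s", err, out)
	}
	return nil
}

func main() {
	if err := checkPaused(); err != nil {
		// Matches the failure line reported in every block of this report.
		fmt.Println("X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed:", err)
	}
}

Run against a node where /run/runc is absent, checkPaused surfaces the same "Process exited with status 1" wrapped error that each block below reports.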

TestAddons/parallel/Registry (14.8s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:384: registry stabilized in 3.131705ms
addons_test.go:386: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-788cd7d5bc-k2xwn" [d69e8f29-5ac8-42ea-977a-4f5c22f21b1d] Running
addons_test.go:386: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.002534209s
addons_test.go:389: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-proxy-m2g76" [67fbed3e-1c42-4176-9b92-f3448feddb21] Running
addons_test.go:389: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003102394s
addons_test.go:394: (dbg) Run:  kubectl --context addons-102660 delete po -l run=registry-test --now
addons_test.go:399: (dbg) Run:  kubectl --context addons-102660 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:399: (dbg) Done: kubectl --context addons-102660 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.358570477s)
addons_test.go:413: (dbg) Run:  out/minikube-linux-amd64 -p addons-102660 ip
2025/12/27 09:07:47 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-102660 addons disable registry --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-102660 addons disable registry --alsologtostderr -v=1: exit status 11 (234.791682ms)

-- stdout --

-- /stdout --
** stderr **
	I1227 09:07:47.622213  388642 out.go:360] Setting OutFile to fd 1 ...
	I1227 09:07:47.622504  388642 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:07:47.622517  388642 out.go:374] Setting ErrFile to fd 2...
	I1227 09:07:47.622524  388642 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:07:47.622726  388642 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22343-373581/.minikube/bin
	I1227 09:07:47.623028  388642 mustload.go:66] Loading cluster: addons-102660
	I1227 09:07:47.623334  388642 config.go:182] Loaded profile config "addons-102660": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:07:47.623355  388642 addons.go:622] checking whether the cluster is paused
	I1227 09:07:47.623445  388642 config.go:182] Loaded profile config "addons-102660": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:07:47.623460  388642 host.go:66] Checking if "addons-102660" exists ...
	I1227 09:07:47.623877  388642 cli_runner.go:164] Run: docker container inspect addons-102660 --format={{.State.Status}}
	I1227 09:07:47.642872  388642 ssh_runner.go:195] Run: systemctl --version
	I1227 09:07:47.642942  388642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-102660
	I1227 09:07:47.661784  388642 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/addons-102660/id_rsa Username:docker}
	I1227 09:07:47.750105  388642 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 09:07:47.750211  388642 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 09:07:47.779082  388642 cri.go:96] found id: "61cbe8e837adc9e12c09da01b60c56b3939ef24c3edb0fb1a1ff7a1e5217bef5"
	I1227 09:07:47.779101  388642 cri.go:96] found id: "5ce019c3bd795b1d8c52d88aa5efda3f2b2a4e70fe66d66a718ac89d8d94459b"
	I1227 09:07:47.779105  388642 cri.go:96] found id: "bc00ef2c406878efd24b7441ce9684489bac149b8ea06437701d881d8fd1134a"
	I1227 09:07:47.779108  388642 cri.go:96] found id: "b84f056d766ce4f088ef4dd81df4a8a450aacef998043dacfd09c38d5e65a5de"
	I1227 09:07:47.779110  388642 cri.go:96] found id: "c811e10f8822a8d0fa521887326d0b7573c9ca6a2c6a83fbd04910dca60d1bb8"
	I1227 09:07:47.779113  388642 cri.go:96] found id: "a2283184858355bab4f1099219f6ece626fa0e2dc72cf70459da07857800fcd9"
	I1227 09:07:47.779116  388642 cri.go:96] found id: "94a18ba09d75af096aec09c9d6e99a28e229b47060eecfb746d369bcc0776811"
	I1227 09:07:47.779119  388642 cri.go:96] found id: "97eda9aefcf0da0558d7a06ddfc649858709d920ee98ddc5d849001e68c7dd53"
	I1227 09:07:47.779122  388642 cri.go:96] found id: "29b050443a0d0564e3cab361aa8cf5f193f04243d42827bc7903078cc1c63669"
	I1227 09:07:47.779127  388642 cri.go:96] found id: "163a0aaa6a06fd63eeafe4e94dd47e7deda5be49bc2cd72c50c63430dc3daffd"
	I1227 09:07:47.779130  388642 cri.go:96] found id: "8c32e03d7b7ea992d32ff6ed1bd6d76a455e62109935f568357619dc5d27c42d"
	I1227 09:07:47.779132  388642 cri.go:96] found id: "95be8942fae68d3719a732f35c19a653707b2b4233b03af95250f4910bf63bf0"
	I1227 09:07:47.779135  388642 cri.go:96] found id: "0100055c8dfbc5a2d1f43d01dbc4eec9a952e71f2fb34164ec64f5f1f8549569"
	I1227 09:07:47.779137  388642 cri.go:96] found id: "94c6915787e83d2b32d5174224a751c4f59d9a2bd7bba84fdc025220be0b5bf7"
	I1227 09:07:47.779141  388642 cri.go:96] found id: "e5e28883bcce170a2c5a972c277beb2812dfa28ff64d65b063802a2d4dd441b0"
	I1227 09:07:47.779149  388642 cri.go:96] found id: "f8332e2291843684ceb08144892b17749193498e0a119605cc9e26ca9611afa0"
	I1227 09:07:47.779152  388642 cri.go:96] found id: "a7360b2982e804662d1c021be1dfe58a9a695fe80b6c5d49bfb0e40b1ecfe587"
	I1227 09:07:47.779157  388642 cri.go:96] found id: "4901f7247dd76708997164babfe1fed20945e96ba91303ace203b6bfb566f849"
	I1227 09:07:47.779159  388642 cri.go:96] found id: "63d37d0b224df2dcac1ae61236ee850f35706ab817a808c84f20ad3e4cb2a302"
	I1227 09:07:47.779162  388642 cri.go:96] found id: "8dc04e3833a231fa5179c87f14246a81d854de5babe95333b3b15e87bcfcf039"
	I1227 09:07:47.779165  388642 cri.go:96] found id: "9e1905ef463d325bf6957345908cff92bced86fe3aa688b53f1bd2132d21eba5"
	I1227 09:07:47.779167  388642 cri.go:96] found id: "8c753ac1232bc2fd8a5f54e08381a58915380ecedec1d0a778df7de68269a65b"
	I1227 09:07:47.779170  388642 cri.go:96] found id: "3ea9a1cdacfc9634d7733f44b141e168572272cb1d2febf81605572e89c7bb0b"
	I1227 09:07:47.779172  388642 cri.go:96] found id: "16c4fbdade2e6b0e19d541ba379ce4ddd0e77b13d32e05b825055943a7d19bc4"
	I1227 09:07:47.779175  388642 cri.go:96] found id: ""
	I1227 09:07:47.779218  388642 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 09:07:47.792352  388642 out.go:203] 
	W1227 09:07:47.793331  388642 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T09:07:47Z" level=error msg="open /run/runc: no such file or directory"
	
	W1227 09:07:47.793363  388642 out.go:285] * 
	W1227 09:07:47.795005  388642 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1227 09:07:47.796146  388642 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable registry addon: args "out/minikube-linux-amd64 -p addons-102660 addons disable registry --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Registry (14.80s)

TestAddons/parallel/RegistryCreds (0.4s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds
=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:325: registry-creds stabilized in 3.118305ms
addons_test.go:327: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-102660
addons_test.go:334: (dbg) Run:  kubectl --context addons-102660 -n kube-system get secret -o yaml
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-102660 addons disable registry-creds --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-102660 addons disable registry-creds --alsologtostderr -v=1: exit status 11 (246.541499ms)

-- stdout --

-- /stdout --
** stderr **
	I1227 09:07:48.011887  388736 out.go:360] Setting OutFile to fd 1 ...
	I1227 09:07:48.012182  388736 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:07:48.012197  388736 out.go:374] Setting ErrFile to fd 2...
	I1227 09:07:48.012203  388736 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:07:48.012511  388736 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22343-373581/.minikube/bin
	I1227 09:07:48.012896  388736 mustload.go:66] Loading cluster: addons-102660
	I1227 09:07:48.013348  388736 config.go:182] Loaded profile config "addons-102660": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:07:48.013382  388736 addons.go:622] checking whether the cluster is paused
	I1227 09:07:48.013519  388736 config.go:182] Loaded profile config "addons-102660": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:07:48.013541  388736 host.go:66] Checking if "addons-102660" exists ...
	I1227 09:07:48.014080  388736 cli_runner.go:164] Run: docker container inspect addons-102660 --format={{.State.Status}}
	I1227 09:07:48.031904  388736 ssh_runner.go:195] Run: systemctl --version
	I1227 09:07:48.031954  388736 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-102660
	I1227 09:07:48.048682  388736 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/addons-102660/id_rsa Username:docker}
	I1227 09:07:48.141862  388736 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 09:07:48.141957  388736 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 09:07:48.176344  388736 cri.go:96] found id: "61cbe8e837adc9e12c09da01b60c56b3939ef24c3edb0fb1a1ff7a1e5217bef5"
	I1227 09:07:48.176363  388736 cri.go:96] found id: "5ce019c3bd795b1d8c52d88aa5efda3f2b2a4e70fe66d66a718ac89d8d94459b"
	I1227 09:07:48.176369  388736 cri.go:96] found id: "bc00ef2c406878efd24b7441ce9684489bac149b8ea06437701d881d8fd1134a"
	I1227 09:07:48.176396  388736 cri.go:96] found id: "b84f056d766ce4f088ef4dd81df4a8a450aacef998043dacfd09c38d5e65a5de"
	I1227 09:07:48.176406  388736 cri.go:96] found id: "c811e10f8822a8d0fa521887326d0b7573c9ca6a2c6a83fbd04910dca60d1bb8"
	I1227 09:07:48.176412  388736 cri.go:96] found id: "a2283184858355bab4f1099219f6ece626fa0e2dc72cf70459da07857800fcd9"
	I1227 09:07:48.176417  388736 cri.go:96] found id: "94a18ba09d75af096aec09c9d6e99a28e229b47060eecfb746d369bcc0776811"
	I1227 09:07:48.176422  388736 cri.go:96] found id: "97eda9aefcf0da0558d7a06ddfc649858709d920ee98ddc5d849001e68c7dd53"
	I1227 09:07:48.176427  388736 cri.go:96] found id: "29b050443a0d0564e3cab361aa8cf5f193f04243d42827bc7903078cc1c63669"
	I1227 09:07:48.176436  388736 cri.go:96] found id: "163a0aaa6a06fd63eeafe4e94dd47e7deda5be49bc2cd72c50c63430dc3daffd"
	I1227 09:07:48.176444  388736 cri.go:96] found id: "8c32e03d7b7ea992d32ff6ed1bd6d76a455e62109935f568357619dc5d27c42d"
	I1227 09:07:48.176449  388736 cri.go:96] found id: "95be8942fae68d3719a732f35c19a653707b2b4233b03af95250f4910bf63bf0"
	I1227 09:07:48.176457  388736 cri.go:96] found id: "0100055c8dfbc5a2d1f43d01dbc4eec9a952e71f2fb34164ec64f5f1f8549569"
	I1227 09:07:48.176462  388736 cri.go:96] found id: "94c6915787e83d2b32d5174224a751c4f59d9a2bd7bba84fdc025220be0b5bf7"
	I1227 09:07:48.176470  388736 cri.go:96] found id: "e5e28883bcce170a2c5a972c277beb2812dfa28ff64d65b063802a2d4dd441b0"
	I1227 09:07:48.176481  388736 cri.go:96] found id: "f8332e2291843684ceb08144892b17749193498e0a119605cc9e26ca9611afa0"
	I1227 09:07:48.176486  388736 cri.go:96] found id: "a7360b2982e804662d1c021be1dfe58a9a695fe80b6c5d49bfb0e40b1ecfe587"
	I1227 09:07:48.176491  388736 cri.go:96] found id: "4901f7247dd76708997164babfe1fed20945e96ba91303ace203b6bfb566f849"
	I1227 09:07:48.176499  388736 cri.go:96] found id: "63d37d0b224df2dcac1ae61236ee850f35706ab817a808c84f20ad3e4cb2a302"
	I1227 09:07:48.176503  388736 cri.go:96] found id: "8dc04e3833a231fa5179c87f14246a81d854de5babe95333b3b15e87bcfcf039"
	I1227 09:07:48.176509  388736 cri.go:96] found id: "9e1905ef463d325bf6957345908cff92bced86fe3aa688b53f1bd2132d21eba5"
	I1227 09:07:48.176516  388736 cri.go:96] found id: "8c753ac1232bc2fd8a5f54e08381a58915380ecedec1d0a778df7de68269a65b"
	I1227 09:07:48.176521  388736 cri.go:96] found id: "3ea9a1cdacfc9634d7733f44b141e168572272cb1d2febf81605572e89c7bb0b"
	I1227 09:07:48.176529  388736 cri.go:96] found id: "16c4fbdade2e6b0e19d541ba379ce4ddd0e77b13d32e05b825055943a7d19bc4"
	I1227 09:07:48.176534  388736 cri.go:96] found id: ""
	I1227 09:07:48.176581  388736 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 09:07:48.191956  388736 out.go:203] 
	W1227 09:07:48.192850  388736 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T09:07:48Z" level=error msg="open /run/runc: no such file or directory"
	
	W1227 09:07:48.192871  388736 out.go:285] * 
	W1227 09:07:48.194932  388736 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1227 09:07:48.195766  388736 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable registry-creds addon: args "out/minikube-linux-amd64 -p addons-102660 addons disable registry-creds --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/RegistryCreds (0.40s)

TestAddons/parallel/Ingress (10.63s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:211: (dbg) Run:  kubectl --context addons-102660 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:236: (dbg) Run:  kubectl --context addons-102660 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:249: (dbg) Run:  kubectl --context addons-102660 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:353: "nginx" [03b738d0-aa48-4517-8759-2cd3349bfe91] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx" [03b738d0-aa48-4517-8759-2cd3349bfe91] Running
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.003622895s
I1227 09:07:57.632555  377171 kapi.go:150] Service nginx in namespace default found.
addons_test.go:266: (dbg) Run:  out/minikube-linux-amd64 -p addons-102660 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:290: (dbg) Run:  kubectl --context addons-102660 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:295: (dbg) Run:  out/minikube-linux-amd64 -p addons-102660 ip
addons_test.go:301: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-102660 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-102660 addons disable ingress-dns --alsologtostderr -v=1: exit status 11 (246.779169ms)

-- stdout --

-- /stdout --
** stderr **
	I1227 09:07:58.384341  390135 out.go:360] Setting OutFile to fd 1 ...
	I1227 09:07:58.384451  390135 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:07:58.384461  390135 out.go:374] Setting ErrFile to fd 2...
	I1227 09:07:58.384465  390135 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:07:58.384671  390135 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22343-373581/.minikube/bin
	I1227 09:07:58.384981  390135 mustload.go:66] Loading cluster: addons-102660
	I1227 09:07:58.385287  390135 config.go:182] Loaded profile config "addons-102660": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:07:58.385313  390135 addons.go:622] checking whether the cluster is paused
	I1227 09:07:58.385398  390135 config.go:182] Loaded profile config "addons-102660": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:07:58.385410  390135 host.go:66] Checking if "addons-102660" exists ...
	I1227 09:07:58.385747  390135 cli_runner.go:164] Run: docker container inspect addons-102660 --format={{.State.Status}}
	I1227 09:07:58.402973  390135 ssh_runner.go:195] Run: systemctl --version
	I1227 09:07:58.403029  390135 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-102660
	I1227 09:07:58.421270  390135 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/addons-102660/id_rsa Username:docker}
	I1227 09:07:58.511127  390135 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 09:07:58.511239  390135 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 09:07:58.549166  390135 cri.go:96] found id: "61cbe8e837adc9e12c09da01b60c56b3939ef24c3edb0fb1a1ff7a1e5217bef5"
	I1227 09:07:58.549185  390135 cri.go:96] found id: "5ce019c3bd795b1d8c52d88aa5efda3f2b2a4e70fe66d66a718ac89d8d94459b"
	I1227 09:07:58.549190  390135 cri.go:96] found id: "bc00ef2c406878efd24b7441ce9684489bac149b8ea06437701d881d8fd1134a"
	I1227 09:07:58.549193  390135 cri.go:96] found id: "b84f056d766ce4f088ef4dd81df4a8a450aacef998043dacfd09c38d5e65a5de"
	I1227 09:07:58.549196  390135 cri.go:96] found id: "c811e10f8822a8d0fa521887326d0b7573c9ca6a2c6a83fbd04910dca60d1bb8"
	I1227 09:07:58.549200  390135 cri.go:96] found id: "a2283184858355bab4f1099219f6ece626fa0e2dc72cf70459da07857800fcd9"
	I1227 09:07:58.549202  390135 cri.go:96] found id: "94a18ba09d75af096aec09c9d6e99a28e229b47060eecfb746d369bcc0776811"
	I1227 09:07:58.549205  390135 cri.go:96] found id: "97eda9aefcf0da0558d7a06ddfc649858709d920ee98ddc5d849001e68c7dd53"
	I1227 09:07:58.549207  390135 cri.go:96] found id: "29b050443a0d0564e3cab361aa8cf5f193f04243d42827bc7903078cc1c63669"
	I1227 09:07:58.549221  390135 cri.go:96] found id: "163a0aaa6a06fd63eeafe4e94dd47e7deda5be49bc2cd72c50c63430dc3daffd"
	I1227 09:07:58.549224  390135 cri.go:96] found id: "8c32e03d7b7ea992d32ff6ed1bd6d76a455e62109935f568357619dc5d27c42d"
	I1227 09:07:58.549227  390135 cri.go:96] found id: "95be8942fae68d3719a732f35c19a653707b2b4233b03af95250f4910bf63bf0"
	I1227 09:07:58.549230  390135 cri.go:96] found id: "0100055c8dfbc5a2d1f43d01dbc4eec9a952e71f2fb34164ec64f5f1f8549569"
	I1227 09:07:58.549233  390135 cri.go:96] found id: "94c6915787e83d2b32d5174224a751c4f59d9a2bd7bba84fdc025220be0b5bf7"
	I1227 09:07:58.549235  390135 cri.go:96] found id: "e5e28883bcce170a2c5a972c277beb2812dfa28ff64d65b063802a2d4dd441b0"
	I1227 09:07:58.549247  390135 cri.go:96] found id: "f8332e2291843684ceb08144892b17749193498e0a119605cc9e26ca9611afa0"
	I1227 09:07:58.549250  390135 cri.go:96] found id: "a7360b2982e804662d1c021be1dfe58a9a695fe80b6c5d49bfb0e40b1ecfe587"
	I1227 09:07:58.549254  390135 cri.go:96] found id: "4901f7247dd76708997164babfe1fed20945e96ba91303ace203b6bfb566f849"
	I1227 09:07:58.549257  390135 cri.go:96] found id: "63d37d0b224df2dcac1ae61236ee850f35706ab817a808c84f20ad3e4cb2a302"
	I1227 09:07:58.549260  390135 cri.go:96] found id: "8dc04e3833a231fa5179c87f14246a81d854de5babe95333b3b15e87bcfcf039"
	I1227 09:07:58.549265  390135 cri.go:96] found id: "9e1905ef463d325bf6957345908cff92bced86fe3aa688b53f1bd2132d21eba5"
	I1227 09:07:58.549270  390135 cri.go:96] found id: "8c753ac1232bc2fd8a5f54e08381a58915380ecedec1d0a778df7de68269a65b"
	I1227 09:07:58.549273  390135 cri.go:96] found id: "3ea9a1cdacfc9634d7733f44b141e168572272cb1d2febf81605572e89c7bb0b"
	I1227 09:07:58.549275  390135 cri.go:96] found id: "16c4fbdade2e6b0e19d541ba379ce4ddd0e77b13d32e05b825055943a7d19bc4"
	I1227 09:07:58.549278  390135 cri.go:96] found id: ""
	I1227 09:07:58.549317  390135 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 09:07:58.565940  390135 out.go:203] 
	W1227 09:07:58.567041  390135 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T09:07:58Z" level=error msg="open /run/runc: no such file or directory"
	
	W1227 09:07:58.567060  390135 out.go:285] * 
	W1227 09:07:58.569109  390135 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1227 09:07:58.570186  390135 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable ingress-dns addon: args "out/minikube-linux-amd64 -p addons-102660 addons disable ingress-dns --alsologtostderr -v=1": exit status 11
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-102660 addons disable ingress --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-102660 addons disable ingress --alsologtostderr -v=1: exit status 11 (259.402004ms)

-- stdout --

-- /stdout --
** stderr **
	I1227 09:07:58.646246  390466 out.go:360] Setting OutFile to fd 1 ...
	I1227 09:07:58.646403  390466 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:07:58.646418  390466 out.go:374] Setting ErrFile to fd 2...
	I1227 09:07:58.646425  390466 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:07:58.646627  390466 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22343-373581/.minikube/bin
	I1227 09:07:58.646980  390466 mustload.go:66] Loading cluster: addons-102660
	I1227 09:07:58.647383  390466 config.go:182] Loaded profile config "addons-102660": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:07:58.647411  390466 addons.go:622] checking whether the cluster is paused
	I1227 09:07:58.647538  390466 config.go:182] Loaded profile config "addons-102660": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:07:58.647557  390466 host.go:66] Checking if "addons-102660" exists ...
	I1227 09:07:58.647989  390466 cli_runner.go:164] Run: docker container inspect addons-102660 --format={{.State.Status}}
	I1227 09:07:58.667662  390466 ssh_runner.go:195] Run: systemctl --version
	I1227 09:07:58.667777  390466 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-102660
	I1227 09:07:58.686779  390466 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/addons-102660/id_rsa Username:docker}
	I1227 09:07:58.778198  390466 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 09:07:58.778274  390466 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 09:07:58.811717  390466 cri.go:96] found id: "61cbe8e837adc9e12c09da01b60c56b3939ef24c3edb0fb1a1ff7a1e5217bef5"
	I1227 09:07:58.811749  390466 cri.go:96] found id: "5ce019c3bd795b1d8c52d88aa5efda3f2b2a4e70fe66d66a718ac89d8d94459b"
	I1227 09:07:58.811756  390466 cri.go:96] found id: "bc00ef2c406878efd24b7441ce9684489bac149b8ea06437701d881d8fd1134a"
	I1227 09:07:58.811761  390466 cri.go:96] found id: "b84f056d766ce4f088ef4dd81df4a8a450aacef998043dacfd09c38d5e65a5de"
	I1227 09:07:58.811765  390466 cri.go:96] found id: "c811e10f8822a8d0fa521887326d0b7573c9ca6a2c6a83fbd04910dca60d1bb8"
	I1227 09:07:58.811771  390466 cri.go:96] found id: "a2283184858355bab4f1099219f6ece626fa0e2dc72cf70459da07857800fcd9"
	I1227 09:07:58.811775  390466 cri.go:96] found id: "94a18ba09d75af096aec09c9d6e99a28e229b47060eecfb746d369bcc0776811"
	I1227 09:07:58.811778  390466 cri.go:96] found id: "97eda9aefcf0da0558d7a06ddfc649858709d920ee98ddc5d849001e68c7dd53"
	I1227 09:07:58.811782  390466 cri.go:96] found id: "29b050443a0d0564e3cab361aa8cf5f193f04243d42827bc7903078cc1c63669"
	I1227 09:07:58.811827  390466 cri.go:96] found id: "163a0aaa6a06fd63eeafe4e94dd47e7deda5be49bc2cd72c50c63430dc3daffd"
	I1227 09:07:58.811837  390466 cri.go:96] found id: "8c32e03d7b7ea992d32ff6ed1bd6d76a455e62109935f568357619dc5d27c42d"
	I1227 09:07:58.811841  390466 cri.go:96] found id: "95be8942fae68d3719a732f35c19a653707b2b4233b03af95250f4910bf63bf0"
	I1227 09:07:58.811846  390466 cri.go:96] found id: "0100055c8dfbc5a2d1f43d01dbc4eec9a952e71f2fb34164ec64f5f1f8549569"
	I1227 09:07:58.811850  390466 cri.go:96] found id: "94c6915787e83d2b32d5174224a751c4f59d9a2bd7bba84fdc025220be0b5bf7"
	I1227 09:07:58.811854  390466 cri.go:96] found id: "e5e28883bcce170a2c5a972c277beb2812dfa28ff64d65b063802a2d4dd441b0"
	I1227 09:07:58.811874  390466 cri.go:96] found id: "f8332e2291843684ceb08144892b17749193498e0a119605cc9e26ca9611afa0"
	I1227 09:07:58.811886  390466 cri.go:96] found id: "a7360b2982e804662d1c021be1dfe58a9a695fe80b6c5d49bfb0e40b1ecfe587"
	I1227 09:07:58.811893  390466 cri.go:96] found id: "4901f7247dd76708997164babfe1fed20945e96ba91303ace203b6bfb566f849"
	I1227 09:07:58.811901  390466 cri.go:96] found id: "63d37d0b224df2dcac1ae61236ee850f35706ab817a808c84f20ad3e4cb2a302"
	I1227 09:07:58.811904  390466 cri.go:96] found id: "8dc04e3833a231fa5179c87f14246a81d854de5babe95333b3b15e87bcfcf039"
	I1227 09:07:58.811913  390466 cri.go:96] found id: "9e1905ef463d325bf6957345908cff92bced86fe3aa688b53f1bd2132d21eba5"
	I1227 09:07:58.811920  390466 cri.go:96] found id: "8c753ac1232bc2fd8a5f54e08381a58915380ecedec1d0a778df7de68269a65b"
	I1227 09:07:58.811925  390466 cri.go:96] found id: "3ea9a1cdacfc9634d7733f44b141e168572272cb1d2febf81605572e89c7bb0b"
	I1227 09:07:58.811931  390466 cri.go:96] found id: "16c4fbdade2e6b0e19d541ba379ce4ddd0e77b13d32e05b825055943a7d19bc4"
	I1227 09:07:58.811936  390466 cri.go:96] found id: ""
	I1227 09:07:58.812012  390466 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 09:07:58.826367  390466 out.go:203] 
	W1227 09:07:58.827323  390466 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T09:07:58Z" level=error msg="open /run/runc: no such file or directory"
	
	W1227 09:07:58.827342  390466 out.go:285] * 
	W1227 09:07:58.829415  390466 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1227 09:07:58.830272  390466 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable ingress addon: args "out/minikube-linux-amd64 -p addons-102660 addons disable ingress --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Ingress (10.63s)

TestAddons/parallel/InspektorGadget (5.24s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:353: "gadget-mqf57" [01b67437-ad76-44b7-8a31-03dfe63a317f] Running
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.003749721s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-102660 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-102660 addons disable inspektor-gadget --alsologtostderr -v=1: exit status 11 (239.130211ms)

-- stdout --

-- /stdout --
** stderr **
	I1227 09:07:48.142076  388774 out.go:360] Setting OutFile to fd 1 ...
	I1227 09:07:48.142386  388774 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:07:48.142399  388774 out.go:374] Setting ErrFile to fd 2...
	I1227 09:07:48.142406  388774 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:07:48.142714  388774 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22343-373581/.minikube/bin
	I1227 09:07:48.143129  388774 mustload.go:66] Loading cluster: addons-102660
	I1227 09:07:48.143608  388774 config.go:182] Loaded profile config "addons-102660": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:07:48.143640  388774 addons.go:622] checking whether the cluster is paused
	I1227 09:07:48.143787  388774 config.go:182] Loaded profile config "addons-102660": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:07:48.143820  388774 host.go:66] Checking if "addons-102660" exists ...
	I1227 09:07:48.144362  388774 cli_runner.go:164] Run: docker container inspect addons-102660 --format={{.State.Status}}
	I1227 09:07:48.165133  388774 ssh_runner.go:195] Run: systemctl --version
	I1227 09:07:48.165192  388774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-102660
	I1227 09:07:48.183693  388774 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/addons-102660/id_rsa Username:docker}
	I1227 09:07:48.275095  388774 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 09:07:48.275194  388774 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 09:07:48.303435  388774 cri.go:96] found id: "61cbe8e837adc9e12c09da01b60c56b3939ef24c3edb0fb1a1ff7a1e5217bef5"
	I1227 09:07:48.303457  388774 cri.go:96] found id: "5ce019c3bd795b1d8c52d88aa5efda3f2b2a4e70fe66d66a718ac89d8d94459b"
	I1227 09:07:48.303461  388774 cri.go:96] found id: "bc00ef2c406878efd24b7441ce9684489bac149b8ea06437701d881d8fd1134a"
	I1227 09:07:48.303464  388774 cri.go:96] found id: "b84f056d766ce4f088ef4dd81df4a8a450aacef998043dacfd09c38d5e65a5de"
	I1227 09:07:48.303467  388774 cri.go:96] found id: "c811e10f8822a8d0fa521887326d0b7573c9ca6a2c6a83fbd04910dca60d1bb8"
	I1227 09:07:48.303470  388774 cri.go:96] found id: "a2283184858355bab4f1099219f6ece626fa0e2dc72cf70459da07857800fcd9"
	I1227 09:07:48.303472  388774 cri.go:96] found id: "94a18ba09d75af096aec09c9d6e99a28e229b47060eecfb746d369bcc0776811"
	I1227 09:07:48.303475  388774 cri.go:96] found id: "97eda9aefcf0da0558d7a06ddfc649858709d920ee98ddc5d849001e68c7dd53"
	I1227 09:07:48.303478  388774 cri.go:96] found id: "29b050443a0d0564e3cab361aa8cf5f193f04243d42827bc7903078cc1c63669"
	I1227 09:07:48.303483  388774 cri.go:96] found id: "163a0aaa6a06fd63eeafe4e94dd47e7deda5be49bc2cd72c50c63430dc3daffd"
	I1227 09:07:48.303486  388774 cri.go:96] found id: "8c32e03d7b7ea992d32ff6ed1bd6d76a455e62109935f568357619dc5d27c42d"
	I1227 09:07:48.303488  388774 cri.go:96] found id: "95be8942fae68d3719a732f35c19a653707b2b4233b03af95250f4910bf63bf0"
	I1227 09:07:48.303491  388774 cri.go:96] found id: "0100055c8dfbc5a2d1f43d01dbc4eec9a952e71f2fb34164ec64f5f1f8549569"
	I1227 09:07:48.303494  388774 cri.go:96] found id: "94c6915787e83d2b32d5174224a751c4f59d9a2bd7bba84fdc025220be0b5bf7"
	I1227 09:07:48.303496  388774 cri.go:96] found id: "e5e28883bcce170a2c5a972c277beb2812dfa28ff64d65b063802a2d4dd441b0"
	I1227 09:07:48.303505  388774 cri.go:96] found id: "f8332e2291843684ceb08144892b17749193498e0a119605cc9e26ca9611afa0"
	I1227 09:07:48.303508  388774 cri.go:96] found id: "a7360b2982e804662d1c021be1dfe58a9a695fe80b6c5d49bfb0e40b1ecfe587"
	I1227 09:07:48.303513  388774 cri.go:96] found id: "4901f7247dd76708997164babfe1fed20945e96ba91303ace203b6bfb566f849"
	I1227 09:07:48.303516  388774 cri.go:96] found id: "63d37d0b224df2dcac1ae61236ee850f35706ab817a808c84f20ad3e4cb2a302"
	I1227 09:07:48.303519  388774 cri.go:96] found id: "8dc04e3833a231fa5179c87f14246a81d854de5babe95333b3b15e87bcfcf039"
	I1227 09:07:48.303525  388774 cri.go:96] found id: "9e1905ef463d325bf6957345908cff92bced86fe3aa688b53f1bd2132d21eba5"
	I1227 09:07:48.303528  388774 cri.go:96] found id: "8c753ac1232bc2fd8a5f54e08381a58915380ecedec1d0a778df7de68269a65b"
	I1227 09:07:48.303531  388774 cri.go:96] found id: "3ea9a1cdacfc9634d7733f44b141e168572272cb1d2febf81605572e89c7bb0b"
	I1227 09:07:48.303534  388774 cri.go:96] found id: "16c4fbdade2e6b0e19d541ba379ce4ddd0e77b13d32e05b825055943a7d19bc4"
	I1227 09:07:48.303537  388774 cri.go:96] found id: ""
	I1227 09:07:48.303572  388774 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 09:07:48.316770  388774 out.go:203] 
	W1227 09:07:48.317776  388774 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T09:07:48Z" level=error msg="open /run/runc: no such file or directory"
	
	W1227 09:07:48.317817  388774 out.go:285] * 
	W1227 09:07:48.319427  388774 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1227 09:07:48.320318  388774 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable inspektor-gadget addon: args "out/minikube-linux-amd64 -p addons-102660 addons disable inspektor-gadget --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/InspektorGadget (5.24s)

TestAddons/parallel/MetricsServer (5.32s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:457: metrics-server stabilized in 3.048418ms
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:353: "metrics-server-5778bb4788-qh2dz" [f956880f-1ade-4d76-95bd-0926d1bbefc2] Running
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003462014s
addons_test.go:465: (dbg) Run:  kubectl --context addons-102660 top pods -n kube-system
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-102660 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-102660 addons disable metrics-server --alsologtostderr -v=1: exit status 11 (243.382417ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1227 09:07:51.108776  389400 out.go:360] Setting OutFile to fd 1 ...
	I1227 09:07:51.109033  389400 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:07:51.109043  389400 out.go:374] Setting ErrFile to fd 2...
	I1227 09:07:51.109047  389400 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:07:51.109267  389400 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22343-373581/.minikube/bin
	I1227 09:07:51.109537  389400 mustload.go:66] Loading cluster: addons-102660
	I1227 09:07:51.109869  389400 config.go:182] Loaded profile config "addons-102660": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:07:51.109891  389400 addons.go:622] checking whether the cluster is paused
	I1227 09:07:51.109970  389400 config.go:182] Loaded profile config "addons-102660": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:07:51.109983  389400 host.go:66] Checking if "addons-102660" exists ...
	I1227 09:07:51.110370  389400 cli_runner.go:164] Run: docker container inspect addons-102660 --format={{.State.Status}}
	I1227 09:07:51.131215  389400 ssh_runner.go:195] Run: systemctl --version
	I1227 09:07:51.131270  389400 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-102660
	I1227 09:07:51.148238  389400 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/addons-102660/id_rsa Username:docker}
	I1227 09:07:51.239089  389400 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 09:07:51.239209  389400 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 09:07:51.268739  389400 cri.go:96] found id: "61cbe8e837adc9e12c09da01b60c56b3939ef24c3edb0fb1a1ff7a1e5217bef5"
	I1227 09:07:51.268759  389400 cri.go:96] found id: "5ce019c3bd795b1d8c52d88aa5efda3f2b2a4e70fe66d66a718ac89d8d94459b"
	I1227 09:07:51.268763  389400 cri.go:96] found id: "bc00ef2c406878efd24b7441ce9684489bac149b8ea06437701d881d8fd1134a"
	I1227 09:07:51.268765  389400 cri.go:96] found id: "b84f056d766ce4f088ef4dd81df4a8a450aacef998043dacfd09c38d5e65a5de"
	I1227 09:07:51.268768  389400 cri.go:96] found id: "c811e10f8822a8d0fa521887326d0b7573c9ca6a2c6a83fbd04910dca60d1bb8"
	I1227 09:07:51.268771  389400 cri.go:96] found id: "a2283184858355bab4f1099219f6ece626fa0e2dc72cf70459da07857800fcd9"
	I1227 09:07:51.268774  389400 cri.go:96] found id: "94a18ba09d75af096aec09c9d6e99a28e229b47060eecfb746d369bcc0776811"
	I1227 09:07:51.268777  389400 cri.go:96] found id: "97eda9aefcf0da0558d7a06ddfc649858709d920ee98ddc5d849001e68c7dd53"
	I1227 09:07:51.268779  389400 cri.go:96] found id: "29b050443a0d0564e3cab361aa8cf5f193f04243d42827bc7903078cc1c63669"
	I1227 09:07:51.268784  389400 cri.go:96] found id: "163a0aaa6a06fd63eeafe4e94dd47e7deda5be49bc2cd72c50c63430dc3daffd"
	I1227 09:07:51.268787  389400 cri.go:96] found id: "8c32e03d7b7ea992d32ff6ed1bd6d76a455e62109935f568357619dc5d27c42d"
	I1227 09:07:51.268816  389400 cri.go:96] found id: "95be8942fae68d3719a732f35c19a653707b2b4233b03af95250f4910bf63bf0"
	I1227 09:07:51.268821  389400 cri.go:96] found id: "0100055c8dfbc5a2d1f43d01dbc4eec9a952e71f2fb34164ec64f5f1f8549569"
	I1227 09:07:51.268825  389400 cri.go:96] found id: "94c6915787e83d2b32d5174224a751c4f59d9a2bd7bba84fdc025220be0b5bf7"
	I1227 09:07:51.268830  389400 cri.go:96] found id: "e5e28883bcce170a2c5a972c277beb2812dfa28ff64d65b063802a2d4dd441b0"
	I1227 09:07:51.268840  389400 cri.go:96] found id: "f8332e2291843684ceb08144892b17749193498e0a119605cc9e26ca9611afa0"
	I1227 09:07:51.268843  389400 cri.go:96] found id: "a7360b2982e804662d1c021be1dfe58a9a695fe80b6c5d49bfb0e40b1ecfe587"
	I1227 09:07:51.268856  389400 cri.go:96] found id: "4901f7247dd76708997164babfe1fed20945e96ba91303ace203b6bfb566f849"
	I1227 09:07:51.268862  389400 cri.go:96] found id: "63d37d0b224df2dcac1ae61236ee850f35706ab817a808c84f20ad3e4cb2a302"
	I1227 09:07:51.268865  389400 cri.go:96] found id: "8dc04e3833a231fa5179c87f14246a81d854de5babe95333b3b15e87bcfcf039"
	I1227 09:07:51.268868  389400 cri.go:96] found id: "9e1905ef463d325bf6957345908cff92bced86fe3aa688b53f1bd2132d21eba5"
	I1227 09:07:51.268871  389400 cri.go:96] found id: "8c753ac1232bc2fd8a5f54e08381a58915380ecedec1d0a778df7de68269a65b"
	I1227 09:07:51.268874  389400 cri.go:96] found id: "3ea9a1cdacfc9634d7733f44b141e168572272cb1d2febf81605572e89c7bb0b"
	I1227 09:07:51.268877  389400 cri.go:96] found id: "16c4fbdade2e6b0e19d541ba379ce4ddd0e77b13d32e05b825055943a7d19bc4"
	I1227 09:07:51.268879  389400 cri.go:96] found id: ""
	I1227 09:07:51.268917  389400 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 09:07:51.283633  389400 out.go:203] 
	W1227 09:07:51.285191  389400 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T09:07:51Z" level=error msg="open /run/runc: no such file or directory"
	
	W1227 09:07:51.285215  389400 out.go:285] * 
	W1227 09:07:51.287669  389400 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1227 09:07:51.288814  389400 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable metrics-server addon: args "out/minikube-linux-amd64 -p addons-102660 addons disable metrics-server --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/MetricsServer (5.32s)
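The failure is in runc's state directory, not in metrics-server itself: `runc list` reads its --root (default /run/runc), and that directory is missing on this node, which is plausible when CRI-O drives containers through a different OCI runtime root. A quick node-side check (the candidate paths below are assumptions for illustration, not taken from this log):

	# which OCI runtime state roots exist on the node? (paths are guesses)
	ls -d /run/runc /run/crun 2>&1
	# reproduces the logged error verbatim while /run/runc is absent:
	sudo runc --root /run/runc list -f json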

                                                
                                    
x
+
TestAddons/parallel/CSI (35.85s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1227 09:07:44.481727  377171 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1227 09:07:44.484847  377171 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1227 09:07:44.484871  377171 kapi.go:107] duration metric: took 3.168276ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:551: csi-hostpath-driver pods stabilized in 3.179098ms
addons_test.go:554: (dbg) Run:  kubectl --context addons-102660 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:559: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-102660 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-102660 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-102660 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-102660 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-102660 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-102660 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-102660 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-102660 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-102660 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-102660 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-102660 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-102660 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-102660 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-102660 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-102660 get pvc hpvc -o jsonpath={.status.phase} -n default
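The fifteen identical invocations above are a single poll: helpers_test.go re-reads .status.phase until the claim reaches the phase it expects. A functionally equivalent loop, with the context, claim name, namespace, and jsonpath taken from the log (the loop shape, interval, and target phase are illustrative, not the helper's implementation):

	# target phase assumed; the helper's actual condition lives in helpers_test.go
	until [ "$(kubectl --context addons-102660 get pvc hpvc -n default -o jsonpath='{.status.phase}')" = "Bound" ]; do
	  sleep 2   # interval is a guess; the helper's real cadence may differ
	done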
addons_test.go:564: (dbg) Run:  kubectl --context addons-102660 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:353: "task-pv-pod" [ca2dfd77-87c8-45ff-a629-a414c748b1cb] Pending
helpers_test.go:353: "task-pv-pod" [ca2dfd77-87c8-45ff-a629-a414c748b1cb] Running
addons_test.go:569: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 6.003562409s
addons_test.go:574: (dbg) Run:  kubectl --context addons-102660 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:428: (dbg) Run:  kubectl --context addons-102660 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:428: (dbg) Run:  kubectl --context addons-102660 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:584: (dbg) Run:  kubectl --context addons-102660 delete pod task-pv-pod
addons_test.go:590: (dbg) Run:  kubectl --context addons-102660 delete pvc hpvc
addons_test.go:596: (dbg) Run:  kubectl --context addons-102660 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:601: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-102660 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-102660 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-102660 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-102660 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-102660 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-102660 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-102660 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:606: (dbg) Run:  kubectl --context addons-102660 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:353: "task-pv-pod-restore" [890f34f6-781c-4c12-b53a-39a4f641343d] Pending
helpers_test.go:353: "task-pv-pod-restore" [890f34f6-781c-4c12-b53a-39a4f641343d] Running
addons_test.go:611: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 6.003717321s
addons_test.go:616: (dbg) Run:  kubectl --context addons-102660 delete pod task-pv-pod-restore
addons_test.go:620: (dbg) Run:  kubectl --context addons-102660 delete pvc hpvc-restore
addons_test.go:624: (dbg) Run:  kubectl --context addons-102660 delete volumesnapshot new-snapshot-demo
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-102660 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-102660 addons disable volumesnapshots --alsologtostderr -v=1: exit status 11 (232.427092ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1227 09:08:19.920250  391117 out.go:360] Setting OutFile to fd 1 ...
	I1227 09:08:19.920363  391117 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:08:19.920371  391117 out.go:374] Setting ErrFile to fd 2...
	I1227 09:08:19.920375  391117 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:08:19.920570  391117 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22343-373581/.minikube/bin
	I1227 09:08:19.920869  391117 mustload.go:66] Loading cluster: addons-102660
	I1227 09:08:19.921222  391117 config.go:182] Loaded profile config "addons-102660": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:08:19.921247  391117 addons.go:622] checking whether the cluster is paused
	I1227 09:08:19.921334  391117 config.go:182] Loaded profile config "addons-102660": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:08:19.921348  391117 host.go:66] Checking if "addons-102660" exists ...
	I1227 09:08:19.921735  391117 cli_runner.go:164] Run: docker container inspect addons-102660 --format={{.State.Status}}
	I1227 09:08:19.939987  391117 ssh_runner.go:195] Run: systemctl --version
	I1227 09:08:19.940052  391117 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-102660
	I1227 09:08:19.957241  391117 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/addons-102660/id_rsa Username:docker}
	I1227 09:08:20.046066  391117 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 09:08:20.046160  391117 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 09:08:20.075677  391117 cri.go:96] found id: "61cbe8e837adc9e12c09da01b60c56b3939ef24c3edb0fb1a1ff7a1e5217bef5"
	I1227 09:08:20.075702  391117 cri.go:96] found id: "5ce019c3bd795b1d8c52d88aa5efda3f2b2a4e70fe66d66a718ac89d8d94459b"
	I1227 09:08:20.075706  391117 cri.go:96] found id: "bc00ef2c406878efd24b7441ce9684489bac149b8ea06437701d881d8fd1134a"
	I1227 09:08:20.075710  391117 cri.go:96] found id: "b84f056d766ce4f088ef4dd81df4a8a450aacef998043dacfd09c38d5e65a5de"
	I1227 09:08:20.075713  391117 cri.go:96] found id: "c811e10f8822a8d0fa521887326d0b7573c9ca6a2c6a83fbd04910dca60d1bb8"
	I1227 09:08:20.075718  391117 cri.go:96] found id: "a2283184858355bab4f1099219f6ece626fa0e2dc72cf70459da07857800fcd9"
	I1227 09:08:20.075720  391117 cri.go:96] found id: "94a18ba09d75af096aec09c9d6e99a28e229b47060eecfb746d369bcc0776811"
	I1227 09:08:20.075723  391117 cri.go:96] found id: "97eda9aefcf0da0558d7a06ddfc649858709d920ee98ddc5d849001e68c7dd53"
	I1227 09:08:20.075725  391117 cri.go:96] found id: "29b050443a0d0564e3cab361aa8cf5f193f04243d42827bc7903078cc1c63669"
	I1227 09:08:20.075732  391117 cri.go:96] found id: "163a0aaa6a06fd63eeafe4e94dd47e7deda5be49bc2cd72c50c63430dc3daffd"
	I1227 09:08:20.075737  391117 cri.go:96] found id: "8c32e03d7b7ea992d32ff6ed1bd6d76a455e62109935f568357619dc5d27c42d"
	I1227 09:08:20.075741  391117 cri.go:96] found id: "95be8942fae68d3719a732f35c19a653707b2b4233b03af95250f4910bf63bf0"
	I1227 09:08:20.075745  391117 cri.go:96] found id: "0100055c8dfbc5a2d1f43d01dbc4eec9a952e71f2fb34164ec64f5f1f8549569"
	I1227 09:08:20.075750  391117 cri.go:96] found id: "94c6915787e83d2b32d5174224a751c4f59d9a2bd7bba84fdc025220be0b5bf7"
	I1227 09:08:20.075759  391117 cri.go:96] found id: "e5e28883bcce170a2c5a972c277beb2812dfa28ff64d65b063802a2d4dd441b0"
	I1227 09:08:20.075767  391117 cri.go:96] found id: "f8332e2291843684ceb08144892b17749193498e0a119605cc9e26ca9611afa0"
	I1227 09:08:20.075770  391117 cri.go:96] found id: "a7360b2982e804662d1c021be1dfe58a9a695fe80b6c5d49bfb0e40b1ecfe587"
	I1227 09:08:20.075774  391117 cri.go:96] found id: "4901f7247dd76708997164babfe1fed20945e96ba91303ace203b6bfb566f849"
	I1227 09:08:20.075777  391117 cri.go:96] found id: "63d37d0b224df2dcac1ae61236ee850f35706ab817a808c84f20ad3e4cb2a302"
	I1227 09:08:20.075780  391117 cri.go:96] found id: "8dc04e3833a231fa5179c87f14246a81d854de5babe95333b3b15e87bcfcf039"
	I1227 09:08:20.075787  391117 cri.go:96] found id: "9e1905ef463d325bf6957345908cff92bced86fe3aa688b53f1bd2132d21eba5"
	I1227 09:08:20.075812  391117 cri.go:96] found id: "8c753ac1232bc2fd8a5f54e08381a58915380ecedec1d0a778df7de68269a65b"
	I1227 09:08:20.075817  391117 cri.go:96] found id: "3ea9a1cdacfc9634d7733f44b141e168572272cb1d2febf81605572e89c7bb0b"
	I1227 09:08:20.075825  391117 cri.go:96] found id: "16c4fbdade2e6b0e19d541ba379ce4ddd0e77b13d32e05b825055943a7d19bc4"
	I1227 09:08:20.075830  391117 cri.go:96] found id: ""
	I1227 09:08:20.075877  391117 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 09:08:20.090180  391117 out.go:203] 
	W1227 09:08:20.091250  391117 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T09:08:20Z" level=error msg="open /run/runc: no such file or directory"
	
	W1227 09:08:20.091268  391117 out.go:285] * 
	W1227 09:08:20.092908  391117 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1227 09:08:20.093926  391117 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable volumesnapshots addon: args "out/minikube-linux-amd64 -p addons-102660 addons disable volumesnapshots --alsologtostderr -v=1": exit status 11
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-102660 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-102660 addons disable csi-hostpath-driver --alsologtostderr -v=1: exit status 11 (231.831236ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1227 09:08:20.154209  391181 out.go:360] Setting OutFile to fd 1 ...
	I1227 09:08:20.154453  391181 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:08:20.154462  391181 out.go:374] Setting ErrFile to fd 2...
	I1227 09:08:20.154466  391181 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:08:20.154675  391181 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22343-373581/.minikube/bin
	I1227 09:08:20.154995  391181 mustload.go:66] Loading cluster: addons-102660
	I1227 09:08:20.155347  391181 config.go:182] Loaded profile config "addons-102660": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:08:20.155370  391181 addons.go:622] checking whether the cluster is paused
	I1227 09:08:20.155456  391181 config.go:182] Loaded profile config "addons-102660": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:08:20.155469  391181 host.go:66] Checking if "addons-102660" exists ...
	I1227 09:08:20.155907  391181 cli_runner.go:164] Run: docker container inspect addons-102660 --format={{.State.Status}}
	I1227 09:08:20.173932  391181 ssh_runner.go:195] Run: systemctl --version
	I1227 09:08:20.174013  391181 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-102660
	I1227 09:08:20.191149  391181 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/addons-102660/id_rsa Username:docker}
	I1227 09:08:20.280054  391181 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 09:08:20.280145  391181 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 09:08:20.308617  391181 cri.go:96] found id: "61cbe8e837adc9e12c09da01b60c56b3939ef24c3edb0fb1a1ff7a1e5217bef5"
	I1227 09:08:20.308643  391181 cri.go:96] found id: "5ce019c3bd795b1d8c52d88aa5efda3f2b2a4e70fe66d66a718ac89d8d94459b"
	I1227 09:08:20.308647  391181 cri.go:96] found id: "bc00ef2c406878efd24b7441ce9684489bac149b8ea06437701d881d8fd1134a"
	I1227 09:08:20.308652  391181 cri.go:96] found id: "b84f056d766ce4f088ef4dd81df4a8a450aacef998043dacfd09c38d5e65a5de"
	I1227 09:08:20.308655  391181 cri.go:96] found id: "c811e10f8822a8d0fa521887326d0b7573c9ca6a2c6a83fbd04910dca60d1bb8"
	I1227 09:08:20.308659  391181 cri.go:96] found id: "a2283184858355bab4f1099219f6ece626fa0e2dc72cf70459da07857800fcd9"
	I1227 09:08:20.308662  391181 cri.go:96] found id: "94a18ba09d75af096aec09c9d6e99a28e229b47060eecfb746d369bcc0776811"
	I1227 09:08:20.308665  391181 cri.go:96] found id: "97eda9aefcf0da0558d7a06ddfc649858709d920ee98ddc5d849001e68c7dd53"
	I1227 09:08:20.308670  391181 cri.go:96] found id: "29b050443a0d0564e3cab361aa8cf5f193f04243d42827bc7903078cc1c63669"
	I1227 09:08:20.308678  391181 cri.go:96] found id: "163a0aaa6a06fd63eeafe4e94dd47e7deda5be49bc2cd72c50c63430dc3daffd"
	I1227 09:08:20.308683  391181 cri.go:96] found id: "8c32e03d7b7ea992d32ff6ed1bd6d76a455e62109935f568357619dc5d27c42d"
	I1227 09:08:20.308693  391181 cri.go:96] found id: "95be8942fae68d3719a732f35c19a653707b2b4233b03af95250f4910bf63bf0"
	I1227 09:08:20.308698  391181 cri.go:96] found id: "0100055c8dfbc5a2d1f43d01dbc4eec9a952e71f2fb34164ec64f5f1f8549569"
	I1227 09:08:20.308706  391181 cri.go:96] found id: "94c6915787e83d2b32d5174224a751c4f59d9a2bd7bba84fdc025220be0b5bf7"
	I1227 09:08:20.308718  391181 cri.go:96] found id: "e5e28883bcce170a2c5a972c277beb2812dfa28ff64d65b063802a2d4dd441b0"
	I1227 09:08:20.308734  391181 cri.go:96] found id: "f8332e2291843684ceb08144892b17749193498e0a119605cc9e26ca9611afa0"
	I1227 09:08:20.308739  391181 cri.go:96] found id: "a7360b2982e804662d1c021be1dfe58a9a695fe80b6c5d49bfb0e40b1ecfe587"
	I1227 09:08:20.308743  391181 cri.go:96] found id: "4901f7247dd76708997164babfe1fed20945e96ba91303ace203b6bfb566f849"
	I1227 09:08:20.308746  391181 cri.go:96] found id: "63d37d0b224df2dcac1ae61236ee850f35706ab817a808c84f20ad3e4cb2a302"
	I1227 09:08:20.308748  391181 cri.go:96] found id: "8dc04e3833a231fa5179c87f14246a81d854de5babe95333b3b15e87bcfcf039"
	I1227 09:08:20.308751  391181 cri.go:96] found id: "9e1905ef463d325bf6957345908cff92bced86fe3aa688b53f1bd2132d21eba5"
	I1227 09:08:20.308755  391181 cri.go:96] found id: "8c753ac1232bc2fd8a5f54e08381a58915380ecedec1d0a778df7de68269a65b"
	I1227 09:08:20.308760  391181 cri.go:96] found id: "3ea9a1cdacfc9634d7733f44b141e168572272cb1d2febf81605572e89c7bb0b"
	I1227 09:08:20.308768  391181 cri.go:96] found id: "16c4fbdade2e6b0e19d541ba379ce4ddd0e77b13d32e05b825055943a7d19bc4"
	I1227 09:08:20.308773  391181 cri.go:96] found id: ""
	I1227 09:08:20.308831  391181 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 09:08:20.322587  391181 out.go:203] 
	W1227 09:08:20.323691  391181 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T09:08:20Z" level=error msg="open /run/runc: no such file or directory"
	
	W1227 09:08:20.323726  391181 out.go:285] * 
	W1227 09:08:20.325464  391181 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1227 09:08:20.326421  391181 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable csi-hostpath-driver addon: args "out/minikube-linux-amd64 -p addons-102660 addons disable csi-hostpath-driver --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CSI (35.85s)

                                                
                                    
x
+
TestAddons/parallel/Headlamp (2.49s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:810: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-102660 --alsologtostderr -v=1
addons_test.go:810: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable headlamp -p addons-102660 --alsologtostderr -v=1: exit status 11 (251.334401ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1227 09:07:33.059842  386810 out.go:360] Setting OutFile to fd 1 ...
	I1227 09:07:33.060122  386810 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:07:33.060132  386810 out.go:374] Setting ErrFile to fd 2...
	I1227 09:07:33.060136  386810 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:07:33.060358  386810 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22343-373581/.minikube/bin
	I1227 09:07:33.060641  386810 mustload.go:66] Loading cluster: addons-102660
	I1227 09:07:33.060981  386810 config.go:182] Loaded profile config "addons-102660": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:07:33.061004  386810 addons.go:622] checking whether the cluster is paused
	I1227 09:07:33.061089  386810 config.go:182] Loaded profile config "addons-102660": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:07:33.061101  386810 host.go:66] Checking if "addons-102660" exists ...
	I1227 09:07:33.061602  386810 cli_runner.go:164] Run: docker container inspect addons-102660 --format={{.State.Status}}
	I1227 09:07:33.080750  386810 ssh_runner.go:195] Run: systemctl --version
	I1227 09:07:33.080858  386810 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-102660
	I1227 09:07:33.099449  386810 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/addons-102660/id_rsa Username:docker}
	I1227 09:07:33.192088  386810 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 09:07:33.192226  386810 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 09:07:33.221929  386810 cri.go:96] found id: "61cbe8e837adc9e12c09da01b60c56b3939ef24c3edb0fb1a1ff7a1e5217bef5"
	I1227 09:07:33.221954  386810 cri.go:96] found id: "5ce019c3bd795b1d8c52d88aa5efda3f2b2a4e70fe66d66a718ac89d8d94459b"
	I1227 09:07:33.221961  386810 cri.go:96] found id: "bc00ef2c406878efd24b7441ce9684489bac149b8ea06437701d881d8fd1134a"
	I1227 09:07:33.221966  386810 cri.go:96] found id: "b84f056d766ce4f088ef4dd81df4a8a450aacef998043dacfd09c38d5e65a5de"
	I1227 09:07:33.221970  386810 cri.go:96] found id: "c811e10f8822a8d0fa521887326d0b7573c9ca6a2c6a83fbd04910dca60d1bb8"
	I1227 09:07:33.221974  386810 cri.go:96] found id: "a2283184858355bab4f1099219f6ece626fa0e2dc72cf70459da07857800fcd9"
	I1227 09:07:33.221976  386810 cri.go:96] found id: "94a18ba09d75af096aec09c9d6e99a28e229b47060eecfb746d369bcc0776811"
	I1227 09:07:33.221981  386810 cri.go:96] found id: "97eda9aefcf0da0558d7a06ddfc649858709d920ee98ddc5d849001e68c7dd53"
	I1227 09:07:33.221986  386810 cri.go:96] found id: "29b050443a0d0564e3cab361aa8cf5f193f04243d42827bc7903078cc1c63669"
	I1227 09:07:33.221993  386810 cri.go:96] found id: "163a0aaa6a06fd63eeafe4e94dd47e7deda5be49bc2cd72c50c63430dc3daffd"
	I1227 09:07:33.221997  386810 cri.go:96] found id: "8c32e03d7b7ea992d32ff6ed1bd6d76a455e62109935f568357619dc5d27c42d"
	I1227 09:07:33.222002  386810 cri.go:96] found id: "95be8942fae68d3719a732f35c19a653707b2b4233b03af95250f4910bf63bf0"
	I1227 09:07:33.222010  386810 cri.go:96] found id: "0100055c8dfbc5a2d1f43d01dbc4eec9a952e71f2fb34164ec64f5f1f8549569"
	I1227 09:07:33.222015  386810 cri.go:96] found id: "94c6915787e83d2b32d5174224a751c4f59d9a2bd7bba84fdc025220be0b5bf7"
	I1227 09:07:33.222023  386810 cri.go:96] found id: "e5e28883bcce170a2c5a972c277beb2812dfa28ff64d65b063802a2d4dd441b0"
	I1227 09:07:33.222033  386810 cri.go:96] found id: "f8332e2291843684ceb08144892b17749193498e0a119605cc9e26ca9611afa0"
	I1227 09:07:33.222038  386810 cri.go:96] found id: "a7360b2982e804662d1c021be1dfe58a9a695fe80b6c5d49bfb0e40b1ecfe587"
	I1227 09:07:33.222044  386810 cri.go:96] found id: "4901f7247dd76708997164babfe1fed20945e96ba91303ace203b6bfb566f849"
	I1227 09:07:33.222050  386810 cri.go:96] found id: "63d37d0b224df2dcac1ae61236ee850f35706ab817a808c84f20ad3e4cb2a302"
	I1227 09:07:33.222057  386810 cri.go:96] found id: "8dc04e3833a231fa5179c87f14246a81d854de5babe95333b3b15e87bcfcf039"
	I1227 09:07:33.222061  386810 cri.go:96] found id: "9e1905ef463d325bf6957345908cff92bced86fe3aa688b53f1bd2132d21eba5"
	I1227 09:07:33.222067  386810 cri.go:96] found id: "8c753ac1232bc2fd8a5f54e08381a58915380ecedec1d0a778df7de68269a65b"
	I1227 09:07:33.222069  386810 cri.go:96] found id: "3ea9a1cdacfc9634d7733f44b141e168572272cb1d2febf81605572e89c7bb0b"
	I1227 09:07:33.222076  386810 cri.go:96] found id: "16c4fbdade2e6b0e19d541ba379ce4ddd0e77b13d32e05b825055943a7d19bc4"
	I1227 09:07:33.222082  386810 cri.go:96] found id: ""
	I1227 09:07:33.222132  386810 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 09:07:33.236781  386810 out.go:203] 
	W1227 09:07:33.237820  386810 out.go:285] X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T09:07:33Z" level=error msg="open /run/runc: no such file or directory"
	
	W1227 09:07:33.237843  386810 out.go:285] * 
	W1227 09:07:33.240174  386810 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1227 09:07:33.243934  386810 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:812: failed to enable headlamp addon: args: "out/minikube-linux-amd64 addons enable headlamp -p addons-102660 --alsologtostderr -v=1": exit status 11
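Both this enable path and the disable paths above refuse to act on a cluster they believe is paused. If the pause state were real rather than a failed probe, the recovery the error code points toward would be the standard unpause-and-retry (ordinary minikube commands, not taken from this log):

	out/minikube-linux-amd64 -p addons-102660 unpause
	out/minikube-linux-amd64 addons enable headlamp -p addons-102660 --alsologtostderr -v=1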
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestAddons/parallel/Headlamp]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestAddons/parallel/Headlamp]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect addons-102660
helpers_test.go:244: (dbg) docker inspect addons-102660:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "fd7a588dca4366306f6e599c21a86f09c810be54ccd0d95bc2de6ff4190b108e",
	        "Created": "2025-12-27T09:06:05.182469105Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 379220,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-27T09:06:05.209115928Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c444b5e11df9d6bc496256b0e11f4e11deb33a885211ded2c3a4667df55bcb6b",
	        "ResolvConfPath": "/var/lib/docker/containers/fd7a588dca4366306f6e599c21a86f09c810be54ccd0d95bc2de6ff4190b108e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/fd7a588dca4366306f6e599c21a86f09c810be54ccd0d95bc2de6ff4190b108e/hostname",
	        "HostsPath": "/var/lib/docker/containers/fd7a588dca4366306f6e599c21a86f09c810be54ccd0d95bc2de6ff4190b108e/hosts",
	        "LogPath": "/var/lib/docker/containers/fd7a588dca4366306f6e599c21a86f09c810be54ccd0d95bc2de6ff4190b108e/fd7a588dca4366306f6e599c21a86f09c810be54ccd0d95bc2de6ff4190b108e-json.log",
	        "Name": "/addons-102660",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-102660:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-102660",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "fd7a588dca4366306f6e599c21a86f09c810be54ccd0d95bc2de6ff4190b108e",
	                "LowerDir": "/var/lib/docker/overlay2/146676c2bfd3fb8057079a910df5804c4a3d17863d90d6ca0da01ca19fc4ee5e-init/diff:/var/lib/docker/overlay2/8c7e7d4b0a6e752d3b033998593f18412fe880cdae3fca36ce4655d5d5dd6a34/diff",
	                "MergedDir": "/var/lib/docker/overlay2/146676c2bfd3fb8057079a910df5804c4a3d17863d90d6ca0da01ca19fc4ee5e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/146676c2bfd3fb8057079a910df5804c4a3d17863d90d6ca0da01ca19fc4ee5e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/146676c2bfd3fb8057079a910df5804c4a3d17863d90d6ca0da01ca19fc4ee5e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-102660",
	                "Source": "/var/lib/docker/volumes/addons-102660/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-102660",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-102660",
	                "name.minikube.sigs.k8s.io": "addons-102660",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "8f8388154e3d0cc8002e8d8821e2b51a336f31d57241a4e7a014a3fec3d24761",
	            "SandboxKey": "/var/run/docker/netns/8f8388154e3d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33143"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33144"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33147"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33145"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33146"
	                    }
	                ]
	            },
	            "Networks": {
	                "addons-102660": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "4b2e137690bd9555da69745dd2ba28c7632aca6bcb8c7c64686821c6b5cee9c8",
	                    "EndpointID": "32802c8bb302f3384eb1c00bf301e2be524df712fa6adeebf8b81909b73e3ab0",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "5e:89:8a:ff:f7:11",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-102660",
	                        "fd7a588dca43"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
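Two details in the inspect output above cross-check the traces: the SSH port 33143 that every run dials is read out of NetworkSettings.Ports via the Go template the traces log, and HostConfig.Tmpfs mounts /run as tmpfs, which is consistent with /run/runc simply not existing unless something recreates it after boot. The template, runnable standalone (verbatim from the traces, minus their extra quoting):

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-102660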
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-102660 -n addons-102660
helpers_test.go:253: <<< TestAddons/parallel/Headlamp FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestAddons/parallel/Headlamp]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p addons-102660 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p addons-102660 logs -n 25: (1.070412701s)
helpers_test.go:261: TestAddons/parallel/Headlamp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-828881 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-828881   │ jenkins │ v1.37.0 │ 27 Dec 25 09:05 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 27 Dec 25 09:05 UTC │ 27 Dec 25 09:05 UTC │
	│ delete  │ -p download-only-828881                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-828881   │ jenkins │ v1.37.0 │ 27 Dec 25 09:05 UTC │ 27 Dec 25 09:05 UTC │
	│ start   │ -o=json --download-only -p download-only-917129 --force --alsologtostderr --kubernetes-version=v1.35.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-917129   │ jenkins │ v1.37.0 │ 27 Dec 25 09:05 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 27 Dec 25 09:05 UTC │ 27 Dec 25 09:05 UTC │
	│ delete  │ -p download-only-917129                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-917129   │ jenkins │ v1.37.0 │ 27 Dec 25 09:05 UTC │ 27 Dec 25 09:05 UTC │
	│ delete  │ -p download-only-828881                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-828881   │ jenkins │ v1.37.0 │ 27 Dec 25 09:05 UTC │ 27 Dec 25 09:05 UTC │
	│ delete  │ -p download-only-917129                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-917129   │ jenkins │ v1.37.0 │ 27 Dec 25 09:05 UTC │ 27 Dec 25 09:05 UTC │
	│ start   │ --download-only -p download-docker-804666 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-804666 │ jenkins │ v1.37.0 │ 27 Dec 25 09:05 UTC │                     │
	│ delete  │ -p download-docker-804666                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-804666 │ jenkins │ v1.37.0 │ 27 Dec 25 09:05 UTC │ 27 Dec 25 09:05 UTC │
	│ start   │ --download-only -p binary-mirror-086395 --alsologtostderr --binary-mirror http://127.0.0.1:37011 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-086395   │ jenkins │ v1.37.0 │ 27 Dec 25 09:05 UTC │                     │
	│ delete  │ -p binary-mirror-086395                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-086395   │ jenkins │ v1.37.0 │ 27 Dec 25 09:05 UTC │ 27 Dec 25 09:05 UTC │
	│ addons  │ disable dashboard -p addons-102660                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-102660          │ jenkins │ v1.37.0 │ 27 Dec 25 09:05 UTC │                     │
	│ addons  │ enable dashboard -p addons-102660                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-102660          │ jenkins │ v1.37.0 │ 27 Dec 25 09:05 UTC │                     │
	│ start   │ -p addons-102660 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-102660          │ jenkins │ v1.37.0 │ 27 Dec 25 09:05 UTC │ 27 Dec 25 09:07 UTC │
	│ addons  │ addons-102660 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-102660          │ jenkins │ v1.37.0 │ 27 Dec 25 09:07 UTC │                     │
	│ addons  │ addons-102660 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-102660          │ jenkins │ v1.37.0 │ 27 Dec 25 09:07 UTC │                     │
	│ addons  │ enable headlamp -p addons-102660 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-102660          │ jenkins │ v1.37.0 │ 27 Dec 25 09:07 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/27 09:05:42
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1227 09:05:42.680820  378582 out.go:360] Setting OutFile to fd 1 ...
	I1227 09:05:42.681048  378582 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:05:42.681057  378582 out.go:374] Setting ErrFile to fd 2...
	I1227 09:05:42.681061  378582 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:05:42.681214  378582 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22343-373581/.minikube/bin
	I1227 09:05:42.681695  378582 out.go:368] Setting JSON to false
	I1227 09:05:42.682608  378582 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2887,"bootTime":1766823456,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1227 09:05:42.682662  378582 start.go:143] virtualization: kvm guest
	I1227 09:05:42.684190  378582 out.go:179] * [addons-102660] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1227 09:05:42.685154  378582 out.go:179]   - MINIKUBE_LOCATION=22343
	I1227 09:05:42.685192  378582 notify.go:221] Checking for updates...
	I1227 09:05:42.686951  378582 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 09:05:42.687969  378582 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22343-373581/kubeconfig
	I1227 09:05:42.688913  378582 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22343-373581/.minikube
	I1227 09:05:42.689717  378582 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1227 09:05:42.690595  378582 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1227 09:05:42.691665  378582 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 09:05:42.715156  378582 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1227 09:05:42.715228  378582 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 09:05:42.768004  378582 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:49 SystemTime:2025-12-27 09:05:42.75882435 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1227 09:05:42.768155  378582 docker.go:319] overlay module found
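For reference, the docker info dump above is what minikube parses on the host to validate the driver (cgroup driver, server version, plugin inventory). A minimal way to query the same fields by hand, assuming the docker CLI and jq are available on the host (jq is an assumption, not something this job uses):

    # Print the daemon info as JSON and pick out the fields minikube checks.
    docker system info --format '{{json .}}' | jq '{CgroupDriver, ServerVersion, OperatingSystem}'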
	I1227 09:05:42.769655  378582 out.go:179] * Using the docker driver based on user configuration
	I1227 09:05:42.770596  378582 start.go:309] selected driver: docker
	I1227 09:05:42.770630  378582 start.go:928] validating driver "docker" against <nil>
	I1227 09:05:42.770650  378582 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 09:05:42.771398  378582 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 09:05:42.824238  378582 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:49 SystemTime:2025-12-27 09:05:42.815086299 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1227 09:05:42.824386  378582 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1227 09:05:42.824606  378582 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 09:05:42.825982  378582 out.go:179] * Using Docker driver with root privileges
	I1227 09:05:42.826977  378582 cni.go:84] Creating CNI manager for ""
	I1227 09:05:42.827043  378582 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 09:05:42.827054  378582 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I1227 09:05:42.827112  378582 start.go:353] cluster config:
	{Name:addons-102660 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:addons-102660 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 09:05:42.828169  378582 out.go:179] * Starting "addons-102660" primary control-plane node in "addons-102660" cluster
	I1227 09:05:42.829119  378582 cache.go:134] Beginning downloading kic base image for docker with crio
	I1227 09:05:42.830036  378582 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
	I1227 09:05:42.830877  378582 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 09:05:42.830903  378582 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22343-373581/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4
	I1227 09:05:42.830912  378582 cache.go:65] Caching tarball of preloaded images
	I1227 09:05:42.830962  378582 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1227 09:05:42.830999  378582 preload.go:251] Found /home/jenkins/minikube-integration/22343-373581/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1227 09:05:42.831015  378582 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I1227 09:05:42.831359  378582 profile.go:143] Saving config to /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/addons-102660/config.json ...
	I1227 09:05:42.831385  378582 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/addons-102660/config.json: {Name:mk505c7ce046ec1ca0db5e6a29a54896d5deee65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:05:42.846661  378582 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a to local cache
	I1227 09:05:42.846826  378582 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local cache directory
	I1227 09:05:42.846847  378582 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local cache directory, skipping pull
	I1227 09:05:42.846856  378582 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in cache, skipping pull
	I1227 09:05:42.846869  378582 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a as a tarball
	I1227 09:05:42.846879  378582 cache.go:176] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a from local cache
	I1227 09:05:55.810895  378582 cache.go:178] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a from cached tarball
	I1227 09:05:55.810960  378582 cache.go:243] Successfully downloaded all kic artifacts
	I1227 09:05:55.811021  378582 start.go:360] acquireMachinesLock for addons-102660: {Name:mk532cd88b2020eaa8237d4b2a09bb436bbe2308 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 09:05:55.811139  378582 start.go:364] duration metric: took 96.447µs to acquireMachinesLock for "addons-102660"
	I1227 09:05:55.811164  378582 start.go:93] Provisioning new machine with config: &{Name:addons-102660 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:addons-102660 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1227 09:05:55.811245  378582 start.go:125] createHost starting for "" (driver="docker")
	I1227 09:05:55.873282  378582 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1227 09:05:55.873634  378582 start.go:159] libmachine.API.Create for "addons-102660" (driver="docker")
	I1227 09:05:55.873692  378582 client.go:173] LocalClient.Create starting
	I1227 09:05:55.873826  378582 main.go:144] libmachine: Creating CA: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca.pem
	I1227 09:05:56.019056  378582 main.go:144] libmachine: Creating client certificate: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/cert.pem
	I1227 09:05:56.139399  378582 cli_runner.go:164] Run: docker network inspect addons-102660 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1227 09:05:56.156063  378582 cli_runner.go:211] docker network inspect addons-102660 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1227 09:05:56.156131  378582 network_create.go:284] running [docker network inspect addons-102660] to gather additional debugging logs...
	I1227 09:05:56.156148  378582 cli_runner.go:164] Run: docker network inspect addons-102660
	W1227 09:05:56.171563  378582 cli_runner.go:211] docker network inspect addons-102660 returned with exit code 1
	I1227 09:05:56.171601  378582 network_create.go:287] error running [docker network inspect addons-102660]: docker network inspect addons-102660: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-102660 not found
	I1227 09:05:56.171622  378582 network_create.go:289] output of [docker network inspect addons-102660]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-102660 not found
	
	** /stderr **
	I1227 09:05:56.171710  378582 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 09:05:56.189134  378582 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001c7fe90}
	I1227 09:05:56.189183  378582 network_create.go:124] attempt to create docker network addons-102660 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1227 09:05:56.189233  378582 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-102660 addons-102660
	I1227 09:05:56.438805  378582 network_create.go:108] docker network addons-102660 192.168.49.0/24 created
	I1227 09:05:56.438846  378582 kic.go:121] calculated static IP "192.168.49.2" for the "addons-102660" container
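minikube probed for a free private subnet, settled on 192.168.49.0/24, and derived the static node IP from the gateway. To confirm the subnet and gateway of the created network after the fact (a sketch, assuming the docker CLI on the same host):

    # Show the subnet and gateway of the bridge network minikube created.
    docker network inspect addons-102660 --format 'subnet={{range .IPAM.Config}}{{.Subnet}}{{end}} gateway={{range .IPAM.Config}}{{.Gateway}}{{end}}'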
	I1227 09:05:56.438953  378582 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1227 09:05:56.455661  378582 cli_runner.go:164] Run: docker volume create addons-102660 --label name.minikube.sigs.k8s.io=addons-102660 --label created_by.minikube.sigs.k8s.io=true
	I1227 09:05:56.564132  378582 oci.go:103] Successfully created a docker volume addons-102660
	I1227 09:05:56.564240  378582 cli_runner.go:164] Run: docker run --rm --name addons-102660-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-102660 --entrypoint /usr/bin/test -v addons-102660:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -d /var/lib
	I1227 09:06:01.452450  378582 cli_runner.go:217] Completed: docker run --rm --name addons-102660-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-102660 --entrypoint /usr/bin/test -v addons-102660:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -d /var/lib: (4.888170469s)
	I1227 09:06:01.452482  378582 oci.go:107] Successfully prepared a docker volume addons-102660
	I1227 09:06:01.452554  378582 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 09:06:01.452567  378582 kic.go:194] Starting extracting preloaded images to volume ...
	I1227 09:06:01.452624  378582 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22343-373581/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-102660:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -I lz4 -xf /preloaded.tar -C /extractDir
	I1227 09:06:05.115930  378582 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22343-373581/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-102660:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -I lz4 -xf /preloaded.tar -C /extractDir: (3.663233091s)
	I1227 09:06:05.115974  378582 kic.go:203] duration metric: took 3.663400851s to extract preloaded images to volume ...
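The preload tarball was extracted straight into the addons-102660 volume, so the node container boots with its image store pre-populated. A quick spot-check of the extraction result (a sketch; the busybox image and the /var/lib/containers layout are assumptions, not part of this job):

    # List the extracted container storage inside the minikube volume.
    docker run --rm -v addons-102660:/var busybox ls /var/lib/containers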
	W1227 09:06:05.116075  378582 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1227 09:06:05.116113  378582 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1227 09:06:05.116161  378582 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1227 09:06:05.167230  378582 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-102660 --name addons-102660 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-102660 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-102660 --network addons-102660 --ip 192.168.49.2 --volume addons-102660:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a
	I1227 09:06:05.416876  378582 cli_runner.go:164] Run: docker container inspect addons-102660 --format={{.State.Running}}
	I1227 09:06:05.434317  378582 cli_runner.go:164] Run: docker container inspect addons-102660 --format={{.State.Status}}
	I1227 09:06:05.450086  378582 cli_runner.go:164] Run: docker exec addons-102660 stat /var/lib/dpkg/alternatives/iptables
	I1227 09:06:05.497529  378582 oci.go:144] the created container "addons-102660" has a running status.
	I1227 09:06:05.497564  378582 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22343-373581/.minikube/machines/addons-102660/id_rsa...
	I1227 09:06:05.579322  378582 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22343-373581/.minikube/machines/addons-102660/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1227 09:06:05.601335  378582 cli_runner.go:164] Run: docker container inspect addons-102660 --format={{.State.Status}}
	I1227 09:06:05.618064  378582 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1227 09:06:05.618085  378582 kic_runner.go:114] Args: [docker exec --privileged addons-102660 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1227 09:06:05.677811  378582 cli_runner.go:164] Run: docker container inspect addons-102660 --format={{.State.Status}}
	I1227 09:06:05.694519  378582 machine.go:94] provisionDockerMachine start ...
	I1227 09:06:05.694615  378582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-102660
	I1227 09:06:05.711996  378582 main.go:144] libmachine: Using SSH client type: native
	I1227 09:06:05.712225  378582 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33143 <nil> <nil>}
	I1227 09:06:05.712236  378582 main.go:144] libmachine: About to run SSH command:
	hostname
	I1227 09:06:05.712957  378582 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:46352->127.0.0.1:33143: read: connection reset by peer
	I1227 09:06:08.834169  378582 main.go:144] libmachine: SSH cmd err, output: <nil>: addons-102660
	
	I1227 09:06:08.834201  378582 ubuntu.go:182] provisioning hostname "addons-102660"
	I1227 09:06:08.834256  378582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-102660
	I1227 09:06:08.851714  378582 main.go:144] libmachine: Using SSH client type: native
	I1227 09:06:08.851950  378582 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33143 <nil> <nil>}
	I1227 09:06:08.851963  378582 main.go:144] libmachine: About to run SSH command:
	sudo hostname addons-102660 && echo "addons-102660" | sudo tee /etc/hostname
	I1227 09:06:08.978569  378582 main.go:144] libmachine: SSH cmd err, output: <nil>: addons-102660
	
	I1227 09:06:08.978648  378582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-102660
	I1227 09:06:08.996264  378582 main.go:144] libmachine: Using SSH client type: native
	I1227 09:06:08.996478  378582 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33143 <nil> <nil>}
	I1227 09:06:08.996495  378582 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-102660' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-102660/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-102660' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1227 09:06:09.115955  378582 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1227 09:06:09.115986  378582 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22343-373581/.minikube CaCertPath:/home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22343-373581/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22343-373581/.minikube}
	I1227 09:06:09.116006  378582 ubuntu.go:190] setting up certificates
	I1227 09:06:09.116027  378582 provision.go:84] configureAuth start
	I1227 09:06:09.116081  378582 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-102660
	I1227 09:06:09.134721  378582 provision.go:143] copyHostCerts
	I1227 09:06:09.134804  378582 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22343-373581/.minikube/ca.pem (1082 bytes)
	I1227 09:06:09.134947  378582 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22343-373581/.minikube/cert.pem (1123 bytes)
	I1227 09:06:09.135019  378582 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22343-373581/.minikube/key.pem (1679 bytes)
	I1227 09:06:09.135082  378582 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22343-373581/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca-key.pem org=jenkins.addons-102660 san=[127.0.0.1 192.168.49.2 addons-102660 localhost minikube]
	I1227 09:06:09.254456  378582 provision.go:177] copyRemoteCerts
	I1227 09:06:09.254517  378582 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1227 09:06:09.254560  378582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-102660
	I1227 09:06:09.272145  378582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/addons-102660/id_rsa Username:docker}
	I1227 09:06:09.361772  378582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1227 09:06:09.380248  378582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1227 09:06:09.396778  378582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1227 09:06:09.412874  378582 provision.go:87] duration metric: took 296.824323ms to configureAuth
	I1227 09:06:09.412900  378582 ubuntu.go:206] setting minikube options for container-runtime
	I1227 09:06:09.413046  378582 config.go:182] Loaded profile config "addons-102660": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:06:09.413145  378582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-102660
	I1227 09:06:09.430475  378582 main.go:144] libmachine: Using SSH client type: native
	I1227 09:06:09.430690  378582 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33143 <nil> <nil>}
	I1227 09:06:09.430707  378582 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1227 09:06:09.682856  378582 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
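The file written above feeds CRIO_MINIKUBE_OPTIONS into the crio unit, which is how the --insecure-registry 10.96.0.0/12 flag reaches the daemon. One way to verify on the node (a sketch, assuming the profile is still running and minikube is on PATH):

    # Confirm the drop-in exists and see how the crio unit consumes it.
    minikube -p addons-102660 ssh -- sudo cat /etc/sysconfig/crio.minikube
    minikube -p addons-102660 ssh -- systemctl cat crio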
	I1227 09:06:09.682886  378582 machine.go:97] duration metric: took 3.988346292s to provisionDockerMachine
	I1227 09:06:09.682899  378582 client.go:176] duration metric: took 13.809198477s to LocalClient.Create
	I1227 09:06:09.682927  378582 start.go:167] duration metric: took 13.809294639s to libmachine.API.Create "addons-102660"
	I1227 09:06:09.682938  378582 start.go:293] postStartSetup for "addons-102660" (driver="docker")
	I1227 09:06:09.682953  378582 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1227 09:06:09.683046  378582 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1227 09:06:09.683095  378582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-102660
	I1227 09:06:09.700465  378582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/addons-102660/id_rsa Username:docker}
	I1227 09:06:09.790837  378582 ssh_runner.go:195] Run: cat /etc/os-release
	I1227 09:06:09.794111  378582 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1227 09:06:09.794137  378582 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1227 09:06:09.794147  378582 filesync.go:126] Scanning /home/jenkins/minikube-integration/22343-373581/.minikube/addons for local assets ...
	I1227 09:06:09.794201  378582 filesync.go:126] Scanning /home/jenkins/minikube-integration/22343-373581/.minikube/files for local assets ...
	I1227 09:06:09.794224  378582 start.go:296] duration metric: took 111.279683ms for postStartSetup
	I1227 09:06:09.794487  378582 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-102660
	I1227 09:06:09.810921  378582 profile.go:143] Saving config to /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/addons-102660/config.json ...
	I1227 09:06:09.811187  378582 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 09:06:09.811231  378582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-102660
	I1227 09:06:09.827310  378582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/addons-102660/id_rsa Username:docker}
	I1227 09:06:09.913270  378582 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1227 09:06:09.917633  378582 start.go:128] duration metric: took 14.106370371s to createHost
	I1227 09:06:09.917656  378582 start.go:83] releasing machines lock for "addons-102660", held for 14.106505562s
	I1227 09:06:09.917722  378582 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-102660
	I1227 09:06:09.934276  378582 ssh_runner.go:195] Run: cat /version.json
	I1227 09:06:09.934327  378582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-102660
	I1227 09:06:09.934369  378582 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1227 09:06:09.934442  378582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-102660
	I1227 09:06:09.952358  378582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/addons-102660/id_rsa Username:docker}
	I1227 09:06:09.954066  378582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/addons-102660/id_rsa Username:docker}
	I1227 09:06:10.092923  378582 ssh_runner.go:195] Run: systemctl --version
	I1227 09:06:10.098895  378582 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1227 09:06:10.131551  378582 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1227 09:06:10.135932  378582 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1227 09:06:10.135989  378582 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1227 09:06:10.160034  378582 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
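The find command above renames any default bridge/podman CNI configs with a .mk_disabled suffix so that only the kindnet configuration recommended earlier stays active. To see what the runtime will actually load (a sketch, assuming shell access to the node):

    # Only configs without the .mk_disabled suffix are picked up by the runtime.
    minikube -p addons-102660 ssh -- ls /etc/cni/net.d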
	I1227 09:06:10.160058  378582 start.go:496] detecting cgroup driver to use...
	I1227 09:06:10.160090  378582 detect.go:190] detected "systemd" cgroup driver on host os
	I1227 09:06:10.160139  378582 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1227 09:06:10.175912  378582 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 09:06:10.187141  378582 docker.go:218] disabling cri-docker service (if available) ...
	I1227 09:06:10.187184  378582 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1227 09:06:10.202136  378582 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1227 09:06:10.218130  378582 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1227 09:06:10.297853  378582 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1227 09:06:10.379691  378582 docker.go:234] disabling docker service ...
	I1227 09:06:10.379747  378582 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1227 09:06:10.397762  378582 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1227 09:06:10.410047  378582 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1227 09:06:10.493346  378582 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1227 09:06:10.577122  378582 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1227 09:06:10.588877  378582 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 09:06:10.602131  378582 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1227 09:06:10.602188  378582 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:06:10.611356  378582 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1227 09:06:10.611412  378582 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:06:10.619492  378582 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:06:10.627447  378582 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:06:10.635731  378582 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1227 09:06:10.643149  378582 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:06:10.651123  378582 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:06:10.663328  378582 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
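The sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: pause image, systemd cgroup manager, conmon cgroup, and an unprivileged-port sysctl. Assuming they all applied, the touched keys can be checked like this (expected values shown as comments, reconstructed from the commands above):

    # Inspect the keys the edits touched.
    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
    # pause_image = "registry.k8s.io/pause:3.10.1"
    # cgroup_manager = "systemd"
    # conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",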
	I1227 09:06:10.671152  378582 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1227 09:06:10.677842  378582 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1227 09:06:10.684376  378582 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 09:06:10.760829  378582 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1227 09:06:10.888855  378582 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1227 09:06:10.888939  378582 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1227 09:06:10.892727  378582 start.go:574] Will wait 60s for crictl version
	I1227 09:06:10.892783  378582 ssh_runner.go:195] Run: which crictl
	I1227 09:06:10.896192  378582 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1227 09:06:10.919064  378582 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1227 09:06:10.919130  378582 ssh_runner.go:195] Run: crio --version
	I1227 09:06:10.945681  378582 ssh_runner.go:195] Run: crio --version
	I1227 09:06:10.973498  378582 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.34.3 ...
	I1227 09:06:10.974526  378582 cli_runner.go:164] Run: docker network inspect addons-102660 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 09:06:10.991246  378582 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1227 09:06:10.995072  378582 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 09:06:11.004706  378582 kubeadm.go:884] updating cluster {Name:addons-102660 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:addons-102660 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1227 09:06:11.004829  378582 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 09:06:11.004870  378582 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 09:06:11.036296  378582 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 09:06:11.036317  378582 crio.go:433] Images already preloaded, skipping extraction
	I1227 09:06:11.036360  378582 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 09:06:11.061285  378582 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 09:06:11.061309  378582 cache_images.go:86] Images are preloaded, skipping loading
	I1227 09:06:11.061319  378582 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.35.0 crio true true} ...
	I1227 09:06:11.061420  378582 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-102660 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:addons-102660 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
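The unit text above is installed as a systemd drop-in (the 10-kubeadm.conf scp a few lines below) so that ExecStart is fully replaced with minikube's kubelet flags. The effective merged unit can be reviewed on the node with:

    # Show the kubelet unit merged with minikube's 10-kubeadm.conf drop-in.
    minikube -p addons-102660 ssh -- systemctl cat kubelet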
	I1227 09:06:11.061497  378582 ssh_runner.go:195] Run: crio config
	I1227 09:06:11.103417  378582 cni.go:84] Creating CNI manager for ""
	I1227 09:06:11.103451  378582 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 09:06:11.103472  378582 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1227 09:06:11.103500  378582 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-102660 NodeName:addons-102660 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1227 09:06:11.103626  378582 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-102660"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
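The four documents rendered above are what minikube writes to /var/tmp/minikube/kubeadm.yaml.new a few lines below and later feeds to kubeadm init --config. A minimal Go sketch of that hand-off (illustration only, not minikube's actual code; the config body is elided and the preflight-error list shortened):

package main

import (
	"os"
	"os/exec"
)

// Write the rendered multi-document kubeadm config to disk, then invoke
// kubeadm init against it, mirroring the Start: line later in this log.
// Paths come from this run; the config body itself is elided here.
func main() {
	cfg := []byte("apiVersion: kubeadm.k8s.io/v1beta4\nkind: InitConfiguration\n# ...remaining documents as rendered above...\n")
	if err := os.WriteFile("/var/tmp/minikube/kubeadm.yaml", cfg, 0o644); err != nil {
		panic(err)
	}
	cmd := exec.Command("kubeadm", "init",
		"--config", "/var/tmp/minikube/kubeadm.yaml",
		"--ignore-preflight-errors=SystemVerification")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}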
	I1227 09:06:11.103696  378582 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1227 09:06:11.111456  378582 binaries.go:51] Found k8s binaries, skipping transfer
	I1227 09:06:11.111525  378582 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1227 09:06:11.119047  378582 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1227 09:06:11.130610  378582 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1227 09:06:11.144187  378582 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1227 09:06:11.155499  378582 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1227 09:06:11.158687  378582 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
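The /etc/hosts command above is an idempotent upsert: any stale line ending in the control-plane alias is filtered out before the current mapping is appended, so repeated starts never accumulate duplicate entries. The same logic as a simplified Go sketch (IP and alias taken from this run):

package main

import (
	"os"
	"strings"
)

// Simplified equivalent of the bash one-liner above: drop any existing
// mapping for the control-plane alias, then append the current one.
func main() {
	const alias = "control-plane.minikube.internal"
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+alias) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, "192.168.49.2\t"+alias)
	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		panic(err)
	}
}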
	I1227 09:06:11.167673  378582 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 09:06:11.247695  378582 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 09:06:11.269811  378582 certs.go:69] Setting up /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/addons-102660 for IP: 192.168.49.2
	I1227 09:06:11.269837  378582 certs.go:195] generating shared ca certs ...
	I1227 09:06:11.269858  378582 certs.go:227] acquiring lock for ca certs: {Name:mkab52f7e7387ca5a3a8b54d27e85c3698ea95d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:06:11.269997  378582 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/22343-373581/.minikube/ca.key
	I1227 09:06:11.495069  378582 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22343-373581/.minikube/ca.crt ...
	I1227 09:06:11.495104  378582 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-373581/.minikube/ca.crt: {Name:mk442054ab52d363554f25d3980121224f7a76c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:06:11.495306  378582 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22343-373581/.minikube/ca.key ...
	I1227 09:06:11.495320  378582 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-373581/.minikube/ca.key: {Name:mk0ed826ed3104892d132c46d13d0c2dc43f1165 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:06:11.495409  378582 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22343-373581/.minikube/proxy-client-ca.key
	I1227 09:06:11.692359  378582 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22343-373581/.minikube/proxy-client-ca.crt ...
	I1227 09:06:11.692390  378582 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-373581/.minikube/proxy-client-ca.crt: {Name:mk47b4bda7f22933ac40e00f46813ac78a08833d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:06:11.692585  378582 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22343-373581/.minikube/proxy-client-ca.key ...
	I1227 09:06:11.692602  378582 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-373581/.minikube/proxy-client-ca.key: {Name:mk1668ff688451e434ebe16e80610c22b16c6a67 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:06:11.692684  378582 certs.go:257] generating profile certs ...
	I1227 09:06:11.692753  378582 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/addons-102660/client.key
	I1227 09:06:11.692767  378582 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/addons-102660/client.crt with IP's: []
	I1227 09:06:11.861076  378582 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/addons-102660/client.crt ...
	I1227 09:06:11.861103  378582 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/addons-102660/client.crt: {Name:mke6bffacbdd18c0143a2790b23ef126b0330c8c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:06:11.861254  378582 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/addons-102660/client.key ...
	I1227 09:06:11.861265  378582 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/addons-102660/client.key: {Name:mk1967891401dfb562acf44b54a94bf8456bc244 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:06:11.861331  378582 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/addons-102660/apiserver.key.d2ac21cd
	I1227 09:06:11.861349  378582 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/addons-102660/apiserver.crt.d2ac21cd with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1227 09:06:11.885845  378582 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/addons-102660/apiserver.crt.d2ac21cd ...
	I1227 09:06:11.885863  378582 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/addons-102660/apiserver.crt.d2ac21cd: {Name:mk3744f718dc2948b56117b5de8225d829fe01bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:06:11.885962  378582 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/addons-102660/apiserver.key.d2ac21cd ...
	I1227 09:06:11.885974  378582 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/addons-102660/apiserver.key.d2ac21cd: {Name:mk109d5fc7fcff08db905794eb117466f7ca5943 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:06:11.886040  378582 certs.go:382] copying /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/addons-102660/apiserver.crt.d2ac21cd -> /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/addons-102660/apiserver.crt
	I1227 09:06:11.886116  378582 certs.go:386] copying /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/addons-102660/apiserver.key.d2ac21cd -> /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/addons-102660/apiserver.key
	I1227 09:06:11.886163  378582 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/addons-102660/proxy-client.key
	I1227 09:06:11.886190  378582 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/addons-102660/proxy-client.crt with IP's: []
	I1227 09:06:12.028314  378582 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/addons-102660/proxy-client.crt ...
	I1227 09:06:12.028342  378582 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/addons-102660/proxy-client.crt: {Name:mk0b2c764ab8123e55f9f4d3004acf0458e5fea4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:06:12.028508  378582 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/addons-102660/proxy-client.key ...
	I1227 09:06:12.028525  378582 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/addons-102660/proxy-client.key: {Name:mke6f4b7056051b8e63bfb2bd3df0fa10ecd5c56 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:06:12.028735  378582 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca-key.pem (1679 bytes)
	I1227 09:06:12.028773  378582 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca.pem (1082 bytes)
	I1227 09:06:12.028824  378582 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/cert.pem (1123 bytes)
	I1227 09:06:12.028852  378582 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/key.pem (1679 bytes)
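Each "generating ... ca cert" step above amounts to creating a self-signed x509 certificate with CA:TRUE and cert-signing key usage, against which the profile certs are then signed. A condensed standard-library sketch (not the actual certs.go implementation, which also handles file locking and key reuse):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"os"
	"time"
)

// Create a self-signed CA: the template serves as both the certificate
// being issued and its own parent. Lifetime and key size are illustrative.
func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}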
	I1227 09:06:12.029487  378582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1227 09:06:12.047456  378582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1227 09:06:12.063863  378582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1227 09:06:12.080192  378582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1227 09:06:12.096120  378582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/addons-102660/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1227 09:06:12.113031  378582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/addons-102660/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1227 09:06:12.129643  378582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/addons-102660/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1227 09:06:12.145842  378582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/addons-102660/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1227 09:06:12.161805  378582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1227 09:06:12.179607  378582 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1227 09:06:12.191089  378582 ssh_runner.go:195] Run: openssl version
	I1227 09:06:12.196717  378582 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:06:12.203478  378582 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1227 09:06:12.212127  378582 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:06:12.215462  378582 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 09:06 /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:06:12.215512  378582 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:06:12.248319  378582 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1227 09:06:12.255244  378582 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
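The openssl x509 -hash call and the b5213941.0 symlink above follow the OpenSSL trust-store convention: a certificate is located via a symlink named <subject-hash>.0 that points at the PEM file. A small Go sketch of the same two steps (paths from this run; the hash is recomputed rather than hard-coded):

package main

import (
	"os"
	"os/exec"
	"strings"
)

// Derive the OpenSSL subject hash for the CA, then install the
// <hash>.0 symlink the system trust store expects (ln -fs semantics).
func main() {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout",
		"-in", "/usr/share/ca-certificates/minikubeCA.pem").Output()
	if err != nil {
		panic(err)
	}
	link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
	os.Remove(link) // ignore error: link may not exist yet
	if err := os.Symlink("/etc/ssl/certs/minikubeCA.pem", link); err != nil {
		panic(err)
	}
}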
	I1227 09:06:12.262220  378582 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1227 09:06:12.265533  378582 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1227 09:06:12.265590  378582 kubeadm.go:401] StartCluster: {Name:addons-102660 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:addons-102660 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 09:06:12.265681  378582 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 09:06:12.265756  378582 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 09:06:12.291546  378582 cri.go:96] found id: ""
	I1227 09:06:12.291608  378582 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1227 09:06:12.298913  378582 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1227 09:06:12.306175  378582 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1227 09:06:12.306237  378582 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1227 09:06:12.313269  378582 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1227 09:06:12.313283  378582 kubeadm.go:158] found existing configuration files:
	
	I1227 09:06:12.313314  378582 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1227 09:06:12.320673  378582 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1227 09:06:12.320719  378582 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1227 09:06:12.327329  378582 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1227 09:06:12.334030  378582 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1227 09:06:12.334071  378582 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1227 09:06:12.340564  378582 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1227 09:06:12.347459  378582 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1227 09:06:12.347514  378582 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1227 09:06:12.354278  378582 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1227 09:06:12.361183  378582 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1227 09:06:12.361218  378582 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1227 09:06:12.367943  378582 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1227 09:06:12.401859  378582 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I1227 09:06:12.401956  378582 kubeadm.go:319] [preflight] Running pre-flight checks
	I1227 09:06:12.463001  378582 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1227 09:06:12.463126  378582 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1227 09:06:12.463196  378582 kubeadm.go:319] OS: Linux
	I1227 09:06:12.463267  378582 kubeadm.go:319] CGROUPS_CPU: enabled
	I1227 09:06:12.463342  378582 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1227 09:06:12.463429  378582 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1227 09:06:12.463509  378582 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1227 09:06:12.463582  378582 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1227 09:06:12.463646  378582 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1227 09:06:12.463707  378582 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1227 09:06:12.463770  378582 kubeadm.go:319] CGROUPS_IO: enabled
	I1227 09:06:12.519577  378582 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1227 09:06:12.519741  378582 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1227 09:06:12.519921  378582 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1227 09:06:12.528563  378582 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1227 09:06:12.530301  378582 out.go:252]   - Generating certificates and keys ...
	I1227 09:06:12.530378  378582 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1227 09:06:12.530447  378582 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1227 09:06:12.584205  378582 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1227 09:06:12.629087  378582 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1227 09:06:12.762279  378582 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1227 09:06:12.791095  378582 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1227 09:06:12.914638  378582 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1227 09:06:12.914803  378582 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-102660 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1227 09:06:12.989248  378582 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1227 09:06:12.989433  378582 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-102660 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1227 09:06:13.282723  378582 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1227 09:06:13.328357  378582 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1227 09:06:13.353012  378582 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1227 09:06:13.353169  378582 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1227 09:06:13.388395  378582 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1227 09:06:13.456729  378582 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1227 09:06:13.608634  378582 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1227 09:06:13.640869  378582 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1227 09:06:13.718993  378582 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1227 09:06:13.719565  378582 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1227 09:06:13.723890  378582 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1227 09:06:13.725137  378582 out.go:252]   - Booting up control plane ...
	I1227 09:06:13.725259  378582 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1227 09:06:13.725348  378582 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1227 09:06:13.726151  378582 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1227 09:06:13.739084  378582 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1227 09:06:13.739200  378582 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1227 09:06:13.745484  378582 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1227 09:06:13.745703  378582 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1227 09:06:13.745744  378582 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1227 09:06:13.837864  378582 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1227 09:06:13.838012  378582 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1227 09:06:14.339479  378582 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.884724ms
	I1227 09:06:14.343426  378582 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1227 09:06:14.343565  378582 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1227 09:06:14.343709  378582 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1227 09:06:14.343863  378582 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1227 09:06:14.848157  378582 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 504.580849ms
	I1227 09:06:15.706710  378582 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.363209707s
	I1227 09:06:17.345623  378582 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.002164263s
	I1227 09:06:17.361578  378582 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1227 09:06:17.370542  378582 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1227 09:06:17.377822  378582 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1227 09:06:17.378105  378582 kubeadm.go:319] [mark-control-plane] Marking the node addons-102660 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1227 09:06:17.386210  378582 kubeadm.go:319] [bootstrap-token] Using token: 67dny8.h6zw46go2y1jmxzm
	I1227 09:06:17.387815  378582 out.go:252]   - Configuring RBAC rules ...
	I1227 09:06:17.387952  378582 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1227 09:06:17.390546  378582 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1227 09:06:17.395119  378582 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1227 09:06:17.397404  378582 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1227 09:06:17.399496  378582 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1227 09:06:17.402254  378582 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1227 09:06:17.750571  378582 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1227 09:06:18.164714  378582 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1227 09:06:18.751957  378582 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1227 09:06:18.752877  378582 kubeadm.go:319] 
	I1227 09:06:18.753000  378582 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1227 09:06:18.753016  378582 kubeadm.go:319] 
	I1227 09:06:18.753082  378582 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1227 09:06:18.753087  378582 kubeadm.go:319] 
	I1227 09:06:18.753107  378582 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1227 09:06:18.753155  378582 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1227 09:06:18.753241  378582 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1227 09:06:18.753259  378582 kubeadm.go:319] 
	I1227 09:06:18.753348  378582 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1227 09:06:18.753360  378582 kubeadm.go:319] 
	I1227 09:06:18.753428  378582 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1227 09:06:18.753434  378582 kubeadm.go:319] 
	I1227 09:06:18.753475  378582 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1227 09:06:18.753542  378582 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1227 09:06:18.753600  378582 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1227 09:06:18.753606  378582 kubeadm.go:319] 
	I1227 09:06:18.753676  378582 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1227 09:06:18.753750  378582 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1227 09:06:18.753762  378582 kubeadm.go:319] 
	I1227 09:06:18.753891  378582 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 67dny8.h6zw46go2y1jmxzm \
	I1227 09:06:18.754024  378582 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:fdd38f26fbaca57ed47cc723a4a49372698d5a31b243372bf53d5b274efa96f5 \
	I1227 09:06:18.754073  378582 kubeadm.go:319] 	--control-plane 
	I1227 09:06:18.754081  378582 kubeadm.go:319] 
	I1227 09:06:18.754175  378582 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1227 09:06:18.754183  378582 kubeadm.go:319] 
	I1227 09:06:18.754256  378582 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 67dny8.h6zw46go2y1jmxzm \
	I1227 09:06:18.754381  378582 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:fdd38f26fbaca57ed47cc723a4a49372698d5a31b243372bf53d5b274efa96f5 
	I1227 09:06:18.756316  378582 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1227 09:06:18.756429  378582 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
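Note on the join commands above: --discovery-token-ca-cert-hash is not a hash of the ca.crt file bytes; kubeadm defines it as the SHA-256 of the CA certificate's DER-encoded SubjectPublicKeyInfo. A short Go sketch that recomputes it (CA path from this run):

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

// Recompute kubeadm's discovery hash: SHA-256 over the DER-encoded
// SubjectPublicKeyInfo of the cluster CA certificate.
func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
	if err != nil {
		panic(err)
	}
	fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
}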
	I1227 09:06:18.756463  378582 cni.go:84] Creating CNI manager for ""
	I1227 09:06:18.756477  378582 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 09:06:18.757814  378582 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1227 09:06:18.758962  378582 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1227 09:06:18.763138  378582 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.35.0/kubectl ...
	I1227 09:06:18.763156  378582 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I1227 09:06:18.776690  378582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1227 09:06:18.968077  378582 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1227 09:06:18.968163  378582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 09:06:18.968197  378582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-102660 minikube.k8s.io/updated_at=2025_12_27T09_06_18_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6d12708197eb5984f26340287ced0d25367967e8 minikube.k8s.io/name=addons-102660 minikube.k8s.io/primary=true
	I1227 09:06:18.977465  378582 ops.go:34] apiserver oom_adj: -16
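The oom_adj check above confirms the apiserver is shielded from the kernel OOM killer: a strongly negative adjustment such as -16 makes it one of the last processes chosen under memory pressure. A Go sketch of the same probe (assumes a single kube-apiserver process, as in this run):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// Find the kube-apiserver PID and read its legacy oom_adj value,
// as the bash command in the log does.
func main() {
	pid, err := exec.Command("pgrep", "kube-apiserver").Output()
	if err != nil {
		panic(err)
	}
	data, err := os.ReadFile("/proc/" + strings.TrimSpace(string(pid)) + "/oom_adj")
	if err != nil {
		panic(err)
	}
	fmt.Println("apiserver oom_adj:", strings.TrimSpace(string(data)))
}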
	I1227 09:06:19.050755  378582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 09:06:19.551584  378582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 09:06:20.051238  378582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 09:06:20.551912  378582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 09:06:21.051540  378582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 09:06:21.551542  378582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 09:06:22.051675  378582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 09:06:22.551043  378582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 09:06:23.051219  378582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 09:06:23.551171  378582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 09:06:23.612414  378582 kubeadm.go:1114] duration metric: took 4.644315335s to wait for elevateKubeSystemPrivileges
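The run of identical "kubectl get sa default" lines above is a fixed-interval poll: minikube retries roughly every 500ms until the default ServiceAccount exists, which signals that the controller-manager's service-account controller is up and the RBAC binding can proceed. A stripped-down sketch of such a loop (binary and kubeconfig paths from this run; the timeout value is illustrative):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// Poll until `kubectl get sa default` succeeds or the deadline passes.
func main() {
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		err := exec.Command("sudo", "/var/lib/minikube/binaries/v1.35.0/kubectl",
			"get", "sa", "default",
			"--kubeconfig=/var/lib/minikube/kubeconfig").Run()
		if err == nil {
			fmt.Println("default service account is ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for the default service account")
}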
	I1227 09:06:23.612459  378582 kubeadm.go:403] duration metric: took 11.346875135s to StartCluster
	I1227 09:06:23.612484  378582 settings.go:142] acquiring lock: {Name:mkc867fd794159ff1847b43b60548e454c403aa9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:06:23.612598  378582 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22343-373581/kubeconfig
	I1227 09:06:23.612975  378582 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-373581/kubeconfig: {Name:mkb411689a0952d3f8325ae3ededdb7352d2980c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:06:23.613174  378582 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1227 09:06:23.613239  378582 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1227 09:06:23.613236  378582 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1227 09:06:23.613368  378582 addons.go:70] Setting cloud-spanner=true in profile "addons-102660"
	I1227 09:06:23.613390  378582 addons.go:239] Setting addon cloud-spanner=true in "addons-102660"
	I1227 09:06:23.613394  378582 addons.go:70] Setting yakd=true in profile "addons-102660"
	I1227 09:06:23.613416  378582 addons.go:239] Setting addon yakd=true in "addons-102660"
	I1227 09:06:23.613430  378582 host.go:66] Checking if "addons-102660" exists ...
	I1227 09:06:23.613434  378582 addons.go:70] Setting default-storageclass=true in profile "addons-102660"
	I1227 09:06:23.613428  378582 addons.go:70] Setting ingress-dns=true in profile "addons-102660"
	I1227 09:06:23.613467  378582 addons.go:239] Setting addon ingress-dns=true in "addons-102660"
	I1227 09:06:23.613481  378582 addons.go:70] Setting inspektor-gadget=true in profile "addons-102660"
	I1227 09:06:23.613506  378582 addons.go:239] Setting addon inspektor-gadget=true in "addons-102660"
	I1227 09:06:23.613491  378582 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-102660"
	I1227 09:06:23.613520  378582 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-102660"
	I1227 09:06:23.613535  378582 host.go:66] Checking if "addons-102660" exists ...
	I1227 09:06:23.613544  378582 host.go:66] Checking if "addons-102660" exists ...
	I1227 09:06:23.613549  378582 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-102660"
	I1227 09:06:23.613566  378582 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-102660"
	I1227 09:06:23.613539  378582 addons.go:70] Setting gcp-auth=true in profile "addons-102660"
	I1227 09:06:23.613589  378582 mustload.go:66] Loading cluster: addons-102660
	I1227 09:06:23.613593  378582 host.go:66] Checking if "addons-102660" exists ...
	I1227 09:06:23.613598  378582 host.go:66] Checking if "addons-102660" exists ...
	I1227 09:06:23.613724  378582 addons.go:70] Setting registry-creds=true in profile "addons-102660"
	I1227 09:06:23.613745  378582 addons.go:239] Setting addon registry-creds=true in "addons-102660"
	I1227 09:06:23.613777  378582 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-102660"
	I1227 09:06:23.613783  378582 host.go:66] Checking if "addons-102660" exists ...
	I1227 09:06:23.613804  378582 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-102660"
	I1227 09:06:23.613834  378582 host.go:66] Checking if "addons-102660" exists ...
	I1227 09:06:23.613854  378582 config.go:182] Loaded profile config "addons-102660": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:06:23.614058  378582 cli_runner.go:164] Run: docker container inspect addons-102660 --format={{.State.Status}}
	I1227 09:06:23.614058  378582 cli_runner.go:164] Run: docker container inspect addons-102660 --format={{.State.Status}}
	I1227 09:06:23.614064  378582 cli_runner.go:164] Run: docker container inspect addons-102660 --format={{.State.Status}}
	I1227 09:06:23.614144  378582 cli_runner.go:164] Run: docker container inspect addons-102660 --format={{.State.Status}}
	I1227 09:06:23.614151  378582 cli_runner.go:164] Run: docker container inspect addons-102660 --format={{.State.Status}}
	I1227 09:06:23.614172  378582 cli_runner.go:164] Run: docker container inspect addons-102660 --format={{.State.Status}}
	I1227 09:06:23.614271  378582 cli_runner.go:164] Run: docker container inspect addons-102660 --format={{.State.Status}}
	I1227 09:06:23.614274  378582 cli_runner.go:164] Run: docker container inspect addons-102660 --format={{.State.Status}}
	I1227 09:06:23.613415  378582 config.go:182] Loaded profile config "addons-102660": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:06:23.615586  378582 addons.go:70] Setting storage-provisioner=true in profile "addons-102660"
	I1227 09:06:23.615623  378582 addons.go:239] Setting addon storage-provisioner=true in "addons-102660"
	I1227 09:06:23.615667  378582 host.go:66] Checking if "addons-102660" exists ...
	I1227 09:06:23.615717  378582 out.go:179] * Verifying Kubernetes components...
	I1227 09:06:23.616113  378582 addons.go:70] Setting registry=true in profile "addons-102660"
	I1227 09:06:23.616160  378582 addons.go:239] Setting addon registry=true in "addons-102660"
	I1227 09:06:23.616199  378582 host.go:66] Checking if "addons-102660" exists ...
	I1227 09:06:23.616731  378582 cli_runner.go:164] Run: docker container inspect addons-102660 --format={{.State.Status}}
	I1227 09:06:23.613506  378582 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-102660"
	I1227 09:06:23.616866  378582 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-102660"
	I1227 09:06:23.616889  378582 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-102660"
	I1227 09:06:23.616977  378582 addons.go:70] Setting ingress=true in profile "addons-102660"
	I1227 09:06:23.616995  378582 addons.go:239] Setting addon ingress=true in "addons-102660"
	I1227 09:06:23.617029  378582 host.go:66] Checking if "addons-102660" exists ...
	I1227 09:06:23.617335  378582 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 09:06:23.617464  378582 addons.go:70] Setting volcano=true in profile "addons-102660"
	I1227 09:06:23.617486  378582 addons.go:239] Setting addon volcano=true in "addons-102660"
	I1227 09:06:23.617532  378582 host.go:66] Checking if "addons-102660" exists ...
	I1227 09:06:23.617637  378582 cli_runner.go:164] Run: docker container inspect addons-102660 --format={{.State.Status}}
	I1227 09:06:23.617727  378582 cli_runner.go:164] Run: docker container inspect addons-102660 --format={{.State.Status}}
	I1227 09:06:23.618277  378582 addons.go:70] Setting volumesnapshots=true in profile "addons-102660"
	I1227 09:06:23.618306  378582 addons.go:239] Setting addon volumesnapshots=true in "addons-102660"
	I1227 09:06:23.618335  378582 host.go:66] Checking if "addons-102660" exists ...
	I1227 09:06:23.618660  378582 cli_runner.go:164] Run: docker container inspect addons-102660 --format={{.State.Status}}
	I1227 09:06:23.619526  378582 cli_runner.go:164] Run: docker container inspect addons-102660 --format={{.State.Status}}
	I1227 09:06:23.613467  378582 host.go:66] Checking if "addons-102660" exists ...
	I1227 09:06:23.622624  378582 cli_runner.go:164] Run: docker container inspect addons-102660 --format={{.State.Status}}
	I1227 09:06:23.625692  378582 cli_runner.go:164] Run: docker container inspect addons-102660 --format={{.State.Status}}
	I1227 09:06:23.626986  378582 addons.go:70] Setting metrics-server=true in profile "addons-102660"
	I1227 09:06:23.627025  378582 addons.go:239] Setting addon metrics-server=true in "addons-102660"
	I1227 09:06:23.627060  378582 host.go:66] Checking if "addons-102660" exists ...
	I1227 09:06:23.627577  378582 cli_runner.go:164] Run: docker container inspect addons-102660 --format={{.State.Status}}
	I1227 09:06:23.627739  378582 cli_runner.go:164] Run: docker container inspect addons-102660 --format={{.State.Status}}
	I1227 09:06:23.672228  378582 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.46
	I1227 09:06:23.675225  378582 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1227 09:06:23.675249  378582 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1227 09:06:23.675316  378582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-102660
	I1227 09:06:23.685870  378582 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1227 09:06:23.691885  378582 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1227 09:06:23.691912  378582 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1227 09:06:23.691978  378582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-102660
	I1227 09:06:23.694497  378582 host.go:66] Checking if "addons-102660" exists ...
	I1227 09:06:23.701587  378582 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.47.0
	I1227 09:06:23.702335  378582 out.go:179]   - Using image docker.io/registry:3.0.0
	I1227 09:06:23.702403  378582 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.1
	I1227 09:06:23.703538  378582 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1227 09:06:23.703557  378582 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1227 09:06:23.703622  378582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-102660
	I1227 09:06:23.703996  378582 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1227 09:06:23.704207  378582 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1227 09:06:23.704221  378582 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1227 09:06:23.704268  378582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-102660
	I1227 09:06:23.704988  378582 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1227 09:06:23.705005  378582 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1227 09:06:23.705050  378582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-102660
	I1227 09:06:23.705543  378582 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1227 09:06:23.710455  378582 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1227 09:06:23.710481  378582 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1227 09:06:23.710585  378582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-102660
	W1227 09:06:23.716624  378582 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1227 09:06:23.719968  378582 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-102660"
	I1227 09:06:23.720045  378582 host.go:66] Checking if "addons-102660" exists ...
	I1227 09:06:23.720912  378582 cli_runner.go:164] Run: docker container inspect addons-102660 --format={{.State.Status}}
	I1227 09:06:23.722639  378582 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.1
	I1227 09:06:23.723316  378582 out.go:179]   - Using image ghcr.io/manusa/yakd:0.0.7
	I1227 09:06:23.726876  378582 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1227 09:06:23.726895  378582 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1227 09:06:23.726976  378582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-102660
	I1227 09:06:23.727180  378582 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1227 09:06:23.728848  378582 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1227 09:06:23.730083  378582 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1227 09:06:23.730909  378582 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1227 09:06:23.732400  378582 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1227 09:06:23.732697  378582 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1227 09:06:23.734078  378582 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1227 09:06:23.734104  378582 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16257 bytes)
	I1227 09:06:23.734169  378582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-102660
	I1227 09:06:23.735056  378582 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1227 09:06:23.737353  378582 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1227 09:06:23.742040  378582 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1227 09:06:23.745059  378582 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1227 09:06:23.746274  378582 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1227 09:06:23.746295  378582 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1227 09:06:23.746414  378582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-102660
	I1227 09:06:23.754923  378582 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1227 09:06:23.755609  378582 addons.go:239] Setting addon default-storageclass=true in "addons-102660"
	I1227 09:06:23.755693  378582 host.go:66] Checking if "addons-102660" exists ...
	I1227 09:06:23.756201  378582 cli_runner.go:164] Run: docker container inspect addons-102660 --format={{.State.Status}}
	I1227 09:06:23.758268  378582 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1227 09:06:23.758324  378582 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1227 09:06:23.759302  378582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-102660
	I1227 09:06:23.762703  378582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/addons-102660/id_rsa Username:docker}
	I1227 09:06:23.766914  378582 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1227 09:06:23.767266  378582 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1227 09:06:23.768195  378582 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1227 09:06:23.768408  378582 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1227 09:06:23.768433  378582 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1227 09:06:23.768511  378582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-102660
	I1227 09:06:23.769494  378582 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1227 09:06:23.769521  378582 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1227 09:06:23.769580  378582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-102660
	I1227 09:06:23.770839  378582 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1227 09:06:23.770856  378582 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1227 09:06:23.770905  378582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-102660
	I1227 09:06:23.771238  378582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/addons-102660/id_rsa Username:docker}
	I1227 09:06:23.787461  378582 out.go:179]   - Using image docker.io/busybox:stable
	I1227 09:06:23.789710  378582 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1227 09:06:23.793762  378582 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1227 09:06:23.793866  378582 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1227 09:06:23.793967  378582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-102660
	I1227 09:06:23.801094  378582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/addons-102660/id_rsa Username:docker}
	I1227 09:06:23.803334  378582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/addons-102660/id_rsa Username:docker}
	I1227 09:06:23.803471  378582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/addons-102660/id_rsa Username:docker}
	I1227 09:06:23.804500  378582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/addons-102660/id_rsa Username:docker}
	I1227 09:06:23.808510  378582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/addons-102660/id_rsa Username:docker}
	I1227 09:06:23.815973  378582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/addons-102660/id_rsa Username:docker}
	I1227 09:06:23.819448  378582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/addons-102660/id_rsa Username:docker}
	I1227 09:06:23.819864  378582 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1227 09:06:23.819938  378582 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1227 09:06:23.820368  378582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-102660
	I1227 09:06:23.827228  378582 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
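The sed pipeline above patches the CoreDNS Corefile in place: it splices a hosts{} stanza (mapping host.minikube.internal to the Docker gateway, 192.168.49.1) before the forward directive and a log directive before errors, then replaces the ConfigMap. The same edit as a Go sketch over a trimmed example Corefile (the real one is fetched from the cluster):

package main

import (
	"fmt"
	"strings"
)

// Splice a hosts{} stanza before the forward directive and a log
// directive before errors, as the sed expressions in the log do.
func main() {
	corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf\n        cache 30\n}"
	var out strings.Builder
	for _, line := range strings.Split(corefile, "\n") {
		switch strings.TrimSpace(line) {
		case "errors":
			out.WriteString("        log\n")
		case "forward . /etc/resolv.conf":
			out.WriteString("        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }\n")
		}
		out.WriteString(line + "\n")
	}
	fmt.Print(out.String())
}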
	I1227 09:06:23.828251  378582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/addons-102660/id_rsa Username:docker}
	I1227 09:06:23.834087  378582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/addons-102660/id_rsa Username:docker}
	I1227 09:06:23.837325  378582 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 09:06:23.839991  378582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/addons-102660/id_rsa Username:docker}
	W1227 09:06:23.842725  378582 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1227 09:06:23.843914  378582 retry.go:84] will retry after 300ms: ssh: handshake failed: EOF
	I1227 09:06:23.846883  378582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/addons-102660/id_rsa Username:docker}
	I1227 09:06:23.855746  378582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/addons-102660/id_rsa Username:docker}
	W1227 09:06:23.859946  378582 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1227 09:06:23.876014  378582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/addons-102660/id_rsa Username:docker}
	I1227 09:06:23.969635  378582 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1227 09:06:23.979904  378582 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1227 09:06:23.979927  378582 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1227 09:06:23.979970  378582 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1227 09:06:23.980034  378582 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1227 09:06:23.984214  378582 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1227 09:06:23.996011  378582 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1227 09:06:23.996039  378582 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1227 09:06:24.004384  378582 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1227 09:06:24.004413  378582 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1227 09:06:24.006627  378582 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1227 09:06:24.006648  378582 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1227 09:06:24.011280  378582 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1227 09:06:24.017132  378582 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1227 09:06:24.018566  378582 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1227 09:06:24.021562  378582 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1227 09:06:24.026168  378582 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1227 09:06:24.028890  378582 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1227 09:06:24.028912  378582 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1227 09:06:24.029811  378582 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1227 09:06:24.029837  378582 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1227 09:06:24.039402  378582 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1227 09:06:24.039427  378582 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1227 09:06:24.059958  378582 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1227 09:06:24.059985  378582 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1227 09:06:24.091071  378582 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1227 09:06:24.091177  378582 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1227 09:06:24.104613  378582 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1227 09:06:24.104734  378582 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1227 09:06:24.111813  378582 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1227 09:06:24.120569  378582 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1227 09:06:24.120659  378582 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1227 09:06:24.148445  378582 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1227 09:06:24.163057  378582 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1227 09:06:24.163083  378582 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2013 bytes)
	I1227 09:06:24.202215  378582 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1227 09:06:24.202244  378582 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1227 09:06:24.208027  378582 node_ready.go:35] waiting up to 6m0s for node "addons-102660" to be "Ready" ...
	I1227 09:06:24.209327  378582 start.go:987] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1227 09:06:24.222625  378582 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1227 09:06:24.316277  378582 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1227 09:06:24.316309  378582 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1227 09:06:24.357548  378582 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1227 09:06:24.369771  378582 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1227 09:06:24.369826  378582 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1227 09:06:24.379012  378582 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1227 09:06:24.379044  378582 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1227 09:06:24.413676  378582 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1227 09:06:24.413771  378582 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1227 09:06:24.428936  378582 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1227 09:06:24.428970  378582 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1227 09:06:24.481194  378582 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1227 09:06:24.481232  378582 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1227 09:06:24.510990  378582 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1227 09:06:24.511016  378582 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1227 09:06:24.533893  378582 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1227 09:06:24.533920  378582 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1227 09:06:24.563958  378582 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1227 09:06:24.563982  378582 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1227 09:06:24.599059  378582 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1227 09:06:24.599142  378582 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1227 09:06:24.608289  378582 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1227 09:06:24.608363  378582 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1227 09:06:24.635019  378582 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1227 09:06:24.650212  378582 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1227 09:06:24.714756  378582 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-102660" context rescaled to 1 replicas
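[Editor's note] The "rescaled to 1 replicas" line reports that the coredns Deployment was scaled down to a single replica for this one-node cluster. A rough command-line equivalent (sketch only; kubectl on PATH is assumed, and minikube performs the scale through the API rather than by shelling out):

    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        // Scale the coredns Deployment in kube-system to one replica.
        out, err := exec.Command("kubectl", "-n", "kube-system",
            "scale", "deployment", "coredns", "--replicas=1").CombinedOutput()
        if err != nil {
            log.Fatalf("scale failed: %v\n%s", err, out)
        }
        log.Printf("%s", out)
    }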
	I1227 09:06:25.230917  378582 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (1.250847707s)
	I1227 09:06:25.230961  378582 addons.go:495] Verifying addon ingress=true in "addons-102660"
	I1227 09:06:25.230968  378582 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.246713974s)
	I1227 09:06:25.231079  378582 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (1.21392488s)
	I1227 09:06:25.231042  378582 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (1.219736588s)
	I1227 09:06:25.231115  378582 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (1.212529487s)
	I1227 09:06:25.231161  378582 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (1.209576777s)
	I1227 09:06:25.231199  378582 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.204980801s)
	I1227 09:06:25.231234  378582 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (1.119393611s)
	I1227 09:06:25.231251  378582 addons.go:495] Verifying addon registry=true in "addons-102660"
	I1227 09:06:25.231388  378582 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.082889286s)
	I1227 09:06:25.231411  378582 addons.go:495] Verifying addon metrics-server=true in "addons-102660"
	I1227 09:06:25.231458  378582 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (1.008800268s)
	I1227 09:06:25.232939  378582 out.go:179] * Verifying ingress addon...
	I1227 09:06:25.232939  378582 out.go:179] * Verifying registry addon...
	I1227 09:06:25.234022  378582 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-102660 service yakd-dashboard -n yakd-dashboard
	
	I1227 09:06:25.235583  378582 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1227 09:06:25.235583  378582 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	W1227 09:06:25.238395  378582 out.go:285] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
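[Editor's note] The storage-provisioner-rancher warning above is an optimistic-concurrency conflict: between reading the local-path StorageClass and writing it back with the default-class annotation, something else updated the object, so the stale resourceVersion was rejected. One hypothetical way around it is a merge patch, which carries no resourceVersion and therefore cannot hit this error (sketch; assumes kubectl on PATH and the local-path class named in the log):

    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        // Mark local-path as the default StorageClass via a merge patch
        // instead of a read-modify-write update.
        patch := `{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}`
        out, err := exec.Command("kubectl", "patch", "storageclass", "local-path",
            "-p", patch).CombinedOutput()
        if err != nil {
            log.Fatalf("patch failed: %v\n%s", err, out)
        }
        log.Printf("%s", out)
    }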
	I1227 09:06:25.238406  378582 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1227 09:06:25.238421  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:06:25.238609  378582 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1227 09:06:25.238628  378582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
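[Editor's note] The kapi.go lines that follow poll each label selector roughly every 500ms (judging by the timestamps) until the pods leave Pending. The same waits can be expressed declaratively; a sketch with the namespaces and selectors taken from the log lines above (note that kubectl wait errors out immediately if no matching pods exist yet, which is one reason a poll loop is more robust here):

    package main

    import (
        "log"
        "os/exec"
    )

    // waitReady blocks until pods matching the selector report Ready,
    // or the timeout (assumed here, mirroring the 6m node timeout) expires.
    func waitReady(namespace, selector string) error {
        out, err := exec.Command("kubectl", "-n", namespace, "wait", "pod",
            "-l", selector, "--for=condition=Ready", "--timeout=6m").CombinedOutput()
        log.Printf("%s", out)
        return err
    }

    func main() {
        _ = waitReady("ingress-nginx", "app.kubernetes.io/name=ingress-nginx")
        _ = waitReady("kube-system", "kubernetes.io/minikube-addons=registry")
    }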
	I1227 09:06:25.739658  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:06:25.739747  378582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:06:25.837252  378582 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.202189282s)
	W1227 09:06:25.837325  378582 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
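[Editor's note] This failure is a CRD-establishment race: the single kubectl apply created the VolumeSnapshot CRDs and, in the same invocation, tried to create a VolumeSnapshotClass before the API server had finished registering the new kind. minikube's answer is simply to retry (the "apply --force" a few lines below, which succeeds). An alternative is to make the ordering explicit by waiting for the CRD to become Established before applying objects of that kind; a sketch assuming kubectl is on PATH:

    package main

    import (
        "log"
        "os/exec"
    )

    // waitForCRD blocks until the named CRD reports the Established
    // condition, so subsequent applies of that kind cannot race it.
    func waitForCRD(name string) error {
        return exec.Command("kubectl", "wait",
            "--for=condition=established", "--timeout=60s",
            "crd/"+name).Run()
    }

    func main() {
        if err := waitForCRD("volumesnapshotclasses.snapshot.storage.k8s.io"); err != nil {
            log.Fatal(err)
        }
    }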
	I1227 09:06:25.837590  378582 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (1.187334891s)
	I1227 09:06:25.837628  378582 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-102660"
	I1227 09:06:25.838982  378582 out.go:179] * Verifying csi-hostpath-driver addon...
	I1227 09:06:25.841147  378582 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1227 09:06:25.844289  378582 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1227 09:06:25.844318  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:06:26.047088  378582 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	W1227 09:06:26.211589  378582 node_ready.go:57] node "addons-102660" has "Ready":"False" status (will retry)
	I1227 09:06:26.240106  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:06:26.240299  378582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:06:26.344780  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:06:26.738779  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:06:26.738946  378582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:06:26.844605  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:06:27.239151  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:06:27.239410  378582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:06:27.344409  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:06:27.739049  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:06:27.739218  378582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:06:27.843836  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:06:28.239049  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:06:28.239256  378582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:06:28.344291  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:06:28.583921  378582 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.536788343s)
	W1227 09:06:28.710394  378582 node_ready.go:57] node "addons-102660" has "Ready":"False" status (will retry)
	I1227 09:06:28.738779  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:06:28.738994  378582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:06:28.845030  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:06:29.238391  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:06:29.238532  378582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:06:29.344677  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:06:29.739092  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:06:29.739183  378582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:06:29.844644  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:06:30.238666  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:06:30.238803  378582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:06:30.344074  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1227 09:06:30.710950  378582 node_ready.go:57] node "addons-102660" has "Ready":"False" status (will retry)
	I1227 09:06:30.738587  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:06:30.738830  378582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:06:30.844351  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:06:31.239486  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:06:31.239602  378582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:06:31.304321  378582 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1227 09:06:31.304408  378582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-102660
	I1227 09:06:31.322083  378582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/addons-102660/id_rsa Username:docker}
	I1227 09:06:31.344280  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:06:31.418170  378582 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1227 09:06:31.430367  378582 addons.go:239] Setting addon gcp-auth=true in "addons-102660"
	I1227 09:06:31.430423  378582 host.go:66] Checking if "addons-102660" exists ...
	I1227 09:06:31.430771  378582 cli_runner.go:164] Run: docker container inspect addons-102660 --format={{.State.Status}}
	I1227 09:06:31.448177  378582 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1227 09:06:31.448228  378582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-102660
	I1227 09:06:31.465140  378582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/addons-102660/id_rsa Username:docker}
	I1227 09:06:31.553398  378582 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1227 09:06:31.554667  378582 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1227 09:06:31.555824  378582 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1227 09:06:31.555843  378582 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1227 09:06:31.568598  378582 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1227 09:06:31.568616  378582 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1227 09:06:31.581538  378582 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1227 09:06:31.581560  378582 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1227 09:06:31.594262  378582 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1227 09:06:31.738817  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:06:31.739018  378582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:06:31.844360  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:06:31.896809  378582 addons.go:495] Verifying addon gcp-auth=true in "addons-102660"
	I1227 09:06:31.898329  378582 out.go:179] * Verifying gcp-auth addon...
	I1227 09:06:31.900128  378582 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1227 09:06:31.945222  378582 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1227 09:06:31.945244  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:06:32.238643  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:06:32.238937  378582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:06:32.344024  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:06:32.403635  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1227 09:06:32.711543  378582 node_ready.go:57] node "addons-102660" has "Ready":"False" status (will retry)
	I1227 09:06:32.739146  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:06:32.739596  378582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:06:32.844555  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:06:32.903026  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:06:33.239055  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:06:33.239235  378582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:06:33.343867  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:06:33.403553  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:06:33.738920  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:06:33.739123  378582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:06:33.844831  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:06:33.903638  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:06:34.238882  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:06:34.238973  378582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:06:34.344639  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:06:34.402849  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1227 09:06:34.711812  378582 node_ready.go:57] node "addons-102660" has "Ready":"False" status (will retry)
	I1227 09:06:34.738108  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:06:34.738330  378582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:06:34.843839  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:06:34.903026  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:06:35.239777  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:06:35.239912  378582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:06:35.344432  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:06:35.402613  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:06:35.738722  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:06:35.738964  378582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:06:35.844224  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:06:35.903771  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:06:36.238319  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:06:36.238340  378582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:06:36.343785  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:06:36.403589  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:06:36.738835  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:06:36.738913  378582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:06:36.844361  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:06:36.902614  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:06:37.211368  378582 node_ready.go:49] node "addons-102660" is "Ready"
	I1227 09:06:37.211402  378582 node_ready.go:38] duration metric: took 13.003335005s for node "addons-102660" to be "Ready" ...
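[Editor's note] node_ready.go polls the node's Ready condition on a roughly 2-second interval (visible in the "will retry" timestamps above) until it flips to True. For illustration only, the same check written against client-go; the kubeconfig path and node name come from the log, everything else is a sketch rather than minikube's implementation:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // nodeReady reports whether the node's Ready condition is True.
    func nodeReady(cs kubernetes.Interface, name string) (bool, error) {
        n, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, c := range n.Status.Conditions {
            if c.Type == corev1.NodeReady {
                return c.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        for {
            if ok, err := nodeReady(cs, "addons-102660"); err == nil && ok {
                fmt.Println("node is Ready")
                return
            }
            time.Sleep(2 * time.Second) // poll interval matching the log
        }
    }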
	I1227 09:06:37.211421  378582 api_server.go:52] waiting for apiserver process to appear ...
	I1227 09:06:37.211480  378582 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 09:06:37.231675  378582 api_server.go:72] duration metric: took 13.618470756s to wait for apiserver process to appear ...
	I1227 09:06:37.231710  378582 api_server.go:88] waiting for apiserver healthz status ...
	I1227 09:06:37.231733  378582 api_server.go:299] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1227 09:06:37.241977  378582 api_server.go:325] https://192.168.49.2:8443/healthz returned 200:
	ok
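[Editor's note] The healthz probe above is a plain HTTPS GET whose body is literally "ok" on success. A rough sketch of the same check; minikube's real probe authenticates with the cluster's client certificates, which this version omits (hence the skipped TLS verification, and the possibility of a 401/403 instead of 200 on clusters that lock down /healthz):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
    )

    func main() {
        // Skip certificate verification since the apiserver cert is
        // signed by the cluster's own CA; a real client would trust it.
        client := &http.Client{Transport: &http.Transport{
            TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
        }}
        resp, err := client.Get("https://192.168.49.2:8443/healthz")
        if err != nil {
            fmt.Println(err)
            return
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("%d %s\n", resp.StatusCode, body) // expect "200 ok" when healthy
    }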
	I1227 09:06:37.242591  378582 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1227 09:06:37.242615  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:06:37.242886  378582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:06:37.244726  378582 api_server.go:141] control plane version: v1.35.0
	I1227 09:06:37.244767  378582 api_server.go:131] duration metric: took 13.048841ms to wait for apiserver health ...
	I1227 09:06:37.244779  378582 system_pods.go:43] waiting for kube-system pods to appear ...
	I1227 09:06:37.250085  378582 system_pods.go:59] 20 kube-system pods found
	I1227 09:06:37.250123  378582 system_pods.go:61] "amd-gpu-device-plugin-77gfj" [52ad5567-bfb5-4109-891b-f498cd21d1b5] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1227 09:06:37.250134  378582 system_pods.go:61] "coredns-7d764666f9-n79db" [826397b3-1b52-45d1-9735-507ceb73aaea] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 09:06:37.250145  378582 system_pods.go:61] "csi-hostpath-attacher-0" [11a05a2a-67de-44b2-85f1-ad9b5b2b694d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1227 09:06:37.250153  378582 system_pods.go:61] "csi-hostpath-resizer-0" [4aab1079-87b3-4497-8f6b-0a06f67ab52b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1227 09:06:37.250161  378582 system_pods.go:61] "csi-hostpathplugin-fql4d" [1bb9abaa-2e48-46e9-aeb2-cac8b988f1ba] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1227 09:06:37.250167  378582 system_pods.go:61] "etcd-addons-102660" [7177a3f7-9b65-4ed0-817d-ee060d171b90] Running
	I1227 09:06:37.250172  378582 system_pods.go:61] "kindnet-cvqk2" [79461f15-4549-461a-9a0f-d2546f89cae1] Running
	I1227 09:06:37.250177  378582 system_pods.go:61] "kube-apiserver-addons-102660" [1fcc2377-3632-4186-bfd7-1ceb50a1fbf0] Running
	I1227 09:06:37.250182  378582 system_pods.go:61] "kube-controller-manager-addons-102660" [3215fe83-382f-438e-96c0-e04b24e24ba7] Running
	I1227 09:06:37.250196  378582 system_pods.go:61] "kube-ingress-dns-minikube" [8a0b0315-a9e7-4c8e-8c0e-20946d53cfba] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1227 09:06:37.250202  378582 system_pods.go:61] "kube-proxy-8mwfh" [e667788b-1c13-49ac-921e-2bca2b55463c] Running
	I1227 09:06:37.250210  378582 system_pods.go:61] "kube-scheduler-addons-102660" [ad586882-884a-4890-9bcd-a9e987a685ff] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1227 09:06:37.250218  378582 system_pods.go:61] "metrics-server-5778bb4788-qh2dz" [f956880f-1ade-4d76-95bd-0926d1bbefc2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1227 09:06:37.250226  378582 system_pods.go:61] "nvidia-device-plugin-daemonset-4jxql" [80e6633f-3dd4-4516-a8c2-040e279afab9] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1227 09:06:37.250233  378582 system_pods.go:61] "registry-788cd7d5bc-k2xwn" [d69e8f29-5ac8-42ea-977a-4f5c22f21b1d] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1227 09:06:37.250241  378582 system_pods.go:61] "registry-creds-567fb78d95-42kx7" [b3517465-58fb-4d30-a2f4-22c3093a3ade] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1227 09:06:37.250251  378582 system_pods.go:61] "registry-proxy-m2g76" [67fbed3e-1c42-4176-9b92-f3448feddb21] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1227 09:06:37.250259  378582 system_pods.go:61] "snapshot-controller-6588d87457-52l8t" [8c70d3c9-bf2d-4ea8-b5df-b79802f949f9] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1227 09:06:37.250268  378582 system_pods.go:61] "snapshot-controller-6588d87457-n2hhv" [9a551cad-0f02-4e06-9730-ed73f3dc9727] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1227 09:06:37.250275  378582 system_pods.go:61] "storage-provisioner" [ef268d4f-41fc-475d-a277-665bcf5e2e93] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1227 09:06:37.250283  378582 system_pods.go:74] duration metric: took 5.497423ms to wait for pod list to return data ...
	I1227 09:06:37.250293  378582 default_sa.go:34] waiting for default service account to be created ...
	I1227 09:06:37.252697  378582 default_sa.go:45] found service account: "default"
	I1227 09:06:37.252720  378582 default_sa.go:55] duration metric: took 2.41929ms for default service account to be created ...
	I1227 09:06:37.252730  378582 system_pods.go:116] waiting for k8s-apps to be running ...
	I1227 09:06:37.256705  378582 system_pods.go:86] 20 kube-system pods found
	I1227 09:06:37.256737  378582 system_pods.go:89] "amd-gpu-device-plugin-77gfj" [52ad5567-bfb5-4109-891b-f498cd21d1b5] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1227 09:06:37.256744  378582 system_pods.go:89] "coredns-7d764666f9-n79db" [826397b3-1b52-45d1-9735-507ceb73aaea] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 09:06:37.256753  378582 system_pods.go:89] "csi-hostpath-attacher-0" [11a05a2a-67de-44b2-85f1-ad9b5b2b694d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1227 09:06:37.256763  378582 system_pods.go:89] "csi-hostpath-resizer-0" [4aab1079-87b3-4497-8f6b-0a06f67ab52b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1227 09:06:37.256780  378582 system_pods.go:89] "csi-hostpathplugin-fql4d" [1bb9abaa-2e48-46e9-aeb2-cac8b988f1ba] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1227 09:06:37.256803  378582 system_pods.go:89] "etcd-addons-102660" [7177a3f7-9b65-4ed0-817d-ee060d171b90] Running
	I1227 09:06:37.256813  378582 system_pods.go:89] "kindnet-cvqk2" [79461f15-4549-461a-9a0f-d2546f89cae1] Running
	I1227 09:06:37.256819  378582 system_pods.go:89] "kube-apiserver-addons-102660" [1fcc2377-3632-4186-bfd7-1ceb50a1fbf0] Running
	I1227 09:06:37.256828  378582 system_pods.go:89] "kube-controller-manager-addons-102660" [3215fe83-382f-438e-96c0-e04b24e24ba7] Running
	I1227 09:06:37.256836  378582 system_pods.go:89] "kube-ingress-dns-minikube" [8a0b0315-a9e7-4c8e-8c0e-20946d53cfba] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1227 09:06:37.256842  378582 system_pods.go:89] "kube-proxy-8mwfh" [e667788b-1c13-49ac-921e-2bca2b55463c] Running
	I1227 09:06:37.256858  378582 system_pods.go:89] "kube-scheduler-addons-102660" [ad586882-884a-4890-9bcd-a9e987a685ff] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1227 09:06:37.256871  378582 system_pods.go:89] "metrics-server-5778bb4788-qh2dz" [f956880f-1ade-4d76-95bd-0926d1bbefc2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1227 09:06:37.256884  378582 system_pods.go:89] "nvidia-device-plugin-daemonset-4jxql" [80e6633f-3dd4-4516-a8c2-040e279afab9] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1227 09:06:37.256895  378582 system_pods.go:89] "registry-788cd7d5bc-k2xwn" [d69e8f29-5ac8-42ea-977a-4f5c22f21b1d] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1227 09:06:37.256905  378582 system_pods.go:89] "registry-creds-567fb78d95-42kx7" [b3517465-58fb-4d30-a2f4-22c3093a3ade] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1227 09:06:37.256913  378582 system_pods.go:89] "registry-proxy-m2g76" [67fbed3e-1c42-4176-9b92-f3448feddb21] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1227 09:06:37.256925  378582 system_pods.go:89] "snapshot-controller-6588d87457-52l8t" [8c70d3c9-bf2d-4ea8-b5df-b79802f949f9] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1227 09:06:37.256937  378582 system_pods.go:89] "snapshot-controller-6588d87457-n2hhv" [9a551cad-0f02-4e06-9730-ed73f3dc9727] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1227 09:06:37.256944  378582 system_pods.go:89] "storage-provisioner" [ef268d4f-41fc-475d-a277-665bcf5e2e93] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1227 09:06:37.256969  378582 retry.go:84] will retry after 200ms: missing components: kube-dns
	I1227 09:06:37.346991  378582 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1227 09:06:37.347021  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:06:37.447227  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:06:37.549760  378582 system_pods.go:86] 20 kube-system pods found
	I1227 09:06:37.549817  378582 system_pods.go:89] "amd-gpu-device-plugin-77gfj" [52ad5567-bfb5-4109-891b-f498cd21d1b5] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1227 09:06:37.549830  378582 system_pods.go:89] "coredns-7d764666f9-n79db" [826397b3-1b52-45d1-9735-507ceb73aaea] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 09:06:37.549843  378582 system_pods.go:89] "csi-hostpath-attacher-0" [11a05a2a-67de-44b2-85f1-ad9b5b2b694d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1227 09:06:37.549855  378582 system_pods.go:89] "csi-hostpath-resizer-0" [4aab1079-87b3-4497-8f6b-0a06f67ab52b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1227 09:06:37.549867  378582 system_pods.go:89] "csi-hostpathplugin-fql4d" [1bb9abaa-2e48-46e9-aeb2-cac8b988f1ba] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1227 09:06:37.549876  378582 system_pods.go:89] "etcd-addons-102660" [7177a3f7-9b65-4ed0-817d-ee060d171b90] Running
	I1227 09:06:37.549882  378582 system_pods.go:89] "kindnet-cvqk2" [79461f15-4549-461a-9a0f-d2546f89cae1] Running
	I1227 09:06:37.549891  378582 system_pods.go:89] "kube-apiserver-addons-102660" [1fcc2377-3632-4186-bfd7-1ceb50a1fbf0] Running
	I1227 09:06:37.549900  378582 system_pods.go:89] "kube-controller-manager-addons-102660" [3215fe83-382f-438e-96c0-e04b24e24ba7] Running
	I1227 09:06:37.549908  378582 system_pods.go:89] "kube-ingress-dns-minikube" [8a0b0315-a9e7-4c8e-8c0e-20946d53cfba] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1227 09:06:37.549916  378582 system_pods.go:89] "kube-proxy-8mwfh" [e667788b-1c13-49ac-921e-2bca2b55463c] Running
	I1227 09:06:37.549921  378582 system_pods.go:89] "kube-scheduler-addons-102660" [ad586882-884a-4890-9bcd-a9e987a685ff] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1227 09:06:37.549931  378582 system_pods.go:89] "metrics-server-5778bb4788-qh2dz" [f956880f-1ade-4d76-95bd-0926d1bbefc2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1227 09:06:37.549943  378582 system_pods.go:89] "nvidia-device-plugin-daemonset-4jxql" [80e6633f-3dd4-4516-a8c2-040e279afab9] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1227 09:06:37.549955  378582 system_pods.go:89] "registry-788cd7d5bc-k2xwn" [d69e8f29-5ac8-42ea-977a-4f5c22f21b1d] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1227 09:06:37.549984  378582 system_pods.go:89] "registry-creds-567fb78d95-42kx7" [b3517465-58fb-4d30-a2f4-22c3093a3ade] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1227 09:06:37.549998  378582 system_pods.go:89] "registry-proxy-m2g76" [67fbed3e-1c42-4176-9b92-f3448feddb21] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1227 09:06:37.550006  378582 system_pods.go:89] "snapshot-controller-6588d87457-52l8t" [8c70d3c9-bf2d-4ea8-b5df-b79802f949f9] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1227 09:06:37.550015  378582 system_pods.go:89] "snapshot-controller-6588d87457-n2hhv" [9a551cad-0f02-4e06-9730-ed73f3dc9727] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1227 09:06:37.550025  378582 system_pods.go:89] "storage-provisioner" [ef268d4f-41fc-475d-a277-665bcf5e2e93] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1227 09:06:37.740872  378582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:06:37.740938  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:06:37.845208  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:06:37.888762  378582 system_pods.go:86] 20 kube-system pods found
	I1227 09:06:37.888819  378582 system_pods.go:89] "amd-gpu-device-plugin-77gfj" [52ad5567-bfb5-4109-891b-f498cd21d1b5] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1227 09:06:37.888830  378582 system_pods.go:89] "coredns-7d764666f9-n79db" [826397b3-1b52-45d1-9735-507ceb73aaea] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 09:06:37.888842  378582 system_pods.go:89] "csi-hostpath-attacher-0" [11a05a2a-67de-44b2-85f1-ad9b5b2b694d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1227 09:06:37.888850  378582 system_pods.go:89] "csi-hostpath-resizer-0" [4aab1079-87b3-4497-8f6b-0a06f67ab52b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1227 09:06:37.888858  378582 system_pods.go:89] "csi-hostpathplugin-fql4d" [1bb9abaa-2e48-46e9-aeb2-cac8b988f1ba] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1227 09:06:37.888873  378582 system_pods.go:89] "etcd-addons-102660" [7177a3f7-9b65-4ed0-817d-ee060d171b90] Running
	I1227 09:06:37.888881  378582 system_pods.go:89] "kindnet-cvqk2" [79461f15-4549-461a-9a0f-d2546f89cae1] Running
	I1227 09:06:37.888887  378582 system_pods.go:89] "kube-apiserver-addons-102660" [1fcc2377-3632-4186-bfd7-1ceb50a1fbf0] Running
	I1227 09:06:37.888892  378582 system_pods.go:89] "kube-controller-manager-addons-102660" [3215fe83-382f-438e-96c0-e04b24e24ba7] Running
	I1227 09:06:37.888901  378582 system_pods.go:89] "kube-ingress-dns-minikube" [8a0b0315-a9e7-4c8e-8c0e-20946d53cfba] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1227 09:06:37.888907  378582 system_pods.go:89] "kube-proxy-8mwfh" [e667788b-1c13-49ac-921e-2bca2b55463c] Running
	I1227 09:06:37.888913  378582 system_pods.go:89] "kube-scheduler-addons-102660" [ad586882-884a-4890-9bcd-a9e987a685ff] Running
	I1227 09:06:37.888920  378582 system_pods.go:89] "metrics-server-5778bb4788-qh2dz" [f956880f-1ade-4d76-95bd-0926d1bbefc2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1227 09:06:37.888937  378582 system_pods.go:89] "nvidia-device-plugin-daemonset-4jxql" [80e6633f-3dd4-4516-a8c2-040e279afab9] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1227 09:06:37.888945  378582 system_pods.go:89] "registry-788cd7d5bc-k2xwn" [d69e8f29-5ac8-42ea-977a-4f5c22f21b1d] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1227 09:06:37.888953  378582 system_pods.go:89] "registry-creds-567fb78d95-42kx7" [b3517465-58fb-4d30-a2f4-22c3093a3ade] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1227 09:06:37.888961  378582 system_pods.go:89] "registry-proxy-m2g76" [67fbed3e-1c42-4176-9b92-f3448feddb21] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1227 09:06:37.888972  378582 system_pods.go:89] "snapshot-controller-6588d87457-52l8t" [8c70d3c9-bf2d-4ea8-b5df-b79802f949f9] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1227 09:06:37.888983  378582 system_pods.go:89] "snapshot-controller-6588d87457-n2hhv" [9a551cad-0f02-4e06-9730-ed73f3dc9727] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1227 09:06:37.888994  378582 system_pods.go:89] "storage-provisioner" [ef268d4f-41fc-475d-a277-665bcf5e2e93] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1227 09:06:37.903776  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:06:38.239839  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:06:38.239952  378582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:06:38.314598  378582 system_pods.go:86] 20 kube-system pods found
	I1227 09:06:38.314637  378582 system_pods.go:89] "amd-gpu-device-plugin-77gfj" [52ad5567-bfb5-4109-891b-f498cd21d1b5] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1227 09:06:38.314645  378582 system_pods.go:89] "coredns-7d764666f9-n79db" [826397b3-1b52-45d1-9735-507ceb73aaea] Running
	I1227 09:06:38.314654  378582 system_pods.go:89] "csi-hostpath-attacher-0" [11a05a2a-67de-44b2-85f1-ad9b5b2b694d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1227 09:06:38.314663  378582 system_pods.go:89] "csi-hostpath-resizer-0" [4aab1079-87b3-4497-8f6b-0a06f67ab52b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1227 09:06:38.314671  378582 system_pods.go:89] "csi-hostpathplugin-fql4d" [1bb9abaa-2e48-46e9-aeb2-cac8b988f1ba] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1227 09:06:38.314677  378582 system_pods.go:89] "etcd-addons-102660" [7177a3f7-9b65-4ed0-817d-ee060d171b90] Running
	I1227 09:06:38.314685  378582 system_pods.go:89] "kindnet-cvqk2" [79461f15-4549-461a-9a0f-d2546f89cae1] Running
	I1227 09:06:38.314690  378582 system_pods.go:89] "kube-apiserver-addons-102660" [1fcc2377-3632-4186-bfd7-1ceb50a1fbf0] Running
	I1227 09:06:38.314696  378582 system_pods.go:89] "kube-controller-manager-addons-102660" [3215fe83-382f-438e-96c0-e04b24e24ba7] Running
	I1227 09:06:38.314706  378582 system_pods.go:89] "kube-ingress-dns-minikube" [8a0b0315-a9e7-4c8e-8c0e-20946d53cfba] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1227 09:06:38.314711  378582 system_pods.go:89] "kube-proxy-8mwfh" [e667788b-1c13-49ac-921e-2bca2b55463c] Running
	I1227 09:06:38.314718  378582 system_pods.go:89] "kube-scheduler-addons-102660" [ad586882-884a-4890-9bcd-a9e987a685ff] Running
	I1227 09:06:38.314727  378582 system_pods.go:89] "metrics-server-5778bb4788-qh2dz" [f956880f-1ade-4d76-95bd-0926d1bbefc2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1227 09:06:38.314736  378582 system_pods.go:89] "nvidia-device-plugin-daemonset-4jxql" [80e6633f-3dd4-4516-a8c2-040e279afab9] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1227 09:06:38.314744  378582 system_pods.go:89] "registry-788cd7d5bc-k2xwn" [d69e8f29-5ac8-42ea-977a-4f5c22f21b1d] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1227 09:06:38.314752  378582 system_pods.go:89] "registry-creds-567fb78d95-42kx7" [b3517465-58fb-4d30-a2f4-22c3093a3ade] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1227 09:06:38.314760  378582 system_pods.go:89] "registry-proxy-m2g76" [67fbed3e-1c42-4176-9b92-f3448feddb21] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1227 09:06:38.314771  378582 system_pods.go:89] "snapshot-controller-6588d87457-52l8t" [8c70d3c9-bf2d-4ea8-b5df-b79802f949f9] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1227 09:06:38.314780  378582 system_pods.go:89] "snapshot-controller-6588d87457-n2hhv" [9a551cad-0f02-4e06-9730-ed73f3dc9727] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1227 09:06:38.314804  378582 system_pods.go:89] "storage-provisioner" [ef268d4f-41fc-475d-a277-665bcf5e2e93] Running
	I1227 09:06:38.314814  378582 system_pods.go:126] duration metric: took 1.062077793s to wait for k8s-apps to be running ...
	I1227 09:06:38.314831  378582 system_svc.go:44] waiting for kubelet service to be running ....
	I1227 09:06:38.314889  378582 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 09:06:38.331592  378582 system_svc.go:56] duration metric: took 16.752837ms WaitForService to wait for kubelet
	I1227 09:06:38.331623  378582 kubeadm.go:587] duration metric: took 14.718423069s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
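
The two system_svc lines above reduce to a single exit-code test over SSH. A minimal sketch, assuming passwordless sudo inside the node, of the same check (this is not minikube's ssh_runner, just the underlying command):

package main

import (
	"fmt"
	"os/exec"
)

// kubeletRunning mirrors the command in the log above
// (`sudo systemctl is-active --quiet service kubelet`): with --quiet nothing
// is printed, so the exit status alone answers the question (0 == active).
func kubeletRunning() bool {
	return exec.Command("sudo", "systemctl", "is-active", "--quiet", "kubelet").Run() == nil
}

func main() {
	fmt.Println("kubelet active:", kubeletRunning())
}
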
	I1227 09:06:38.331648  378582 node_conditions.go:102] verifying NodePressure condition ...
	I1227 09:06:38.334780  378582 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1227 09:06:38.334824  378582 node_conditions.go:123] node cpu capacity is 8
	I1227 09:06:38.334856  378582 node_conditions.go:105] duration metric: took 3.199973ms to run NodePressure ...
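
The node_conditions lines above read the node's capacity and then verify that no pressure condition is set. A minimal client-go sketch of the same verification, assuming a kubeconfig at the default path; it is an illustration, not minikube's code:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		// Capacity carries the figures logged above (cpu, ephemeral storage).
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name,
			n.Status.Capacity.Cpu(), n.Status.Capacity.StorageEphemeral())
		// A node is under pressure when any of these conditions reports True.
		for _, c := range n.Status.Conditions {
			switch c.Type {
			case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
				if c.Status == corev1.ConditionTrue {
					fmt.Printf("  %s is True\n", c.Type)
				}
			}
		}
	}
}
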
	I1227 09:06:38.334872  378582 start.go:242] waiting for startup goroutines ...
	I1227 09:06:38.344762  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:06:38.403271  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:06:38.739299  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:06:38.739409  378582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:06:38.844101  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:06:38.903340  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:06:39.240100  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:06:39.240150  378582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:06:39.344970  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:06:39.403474  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:06:39.738868  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:06:39.738952  378582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:06:39.844589  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:06:39.903379  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:06:40.240343  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:06:40.240462  378582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:06:40.344609  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:06:40.403305  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:06:40.739533  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:06:40.739594  378582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:06:40.844504  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:06:40.903228  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:06:41.239564  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:06:41.239599  378582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:06:41.344461  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:06:41.402492  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:06:41.738454  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:06:41.738613  378582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:06:41.844097  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:06:41.903093  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:06:42.239403  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:06:42.239478  378582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:06:42.344189  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:06:42.403460  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:06:42.739659  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:06:42.739753  378582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:06:42.844592  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:06:42.902906  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:06:43.239271  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:06:43.239312  378582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:06:43.344477  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:06:43.402929  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:06:43.739564  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:06:43.739651  378582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:06:43.845155  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:06:43.903966  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:06:44.239504  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:06:44.239822  378582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:06:44.345189  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:06:44.403756  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:06:44.738758  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:06:44.738808  378582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:06:44.844510  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:06:44.902486  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:06:45.238708  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:06:45.238724  378582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:06:45.345293  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:06:45.403885  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:06:45.739745  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:06:45.739811  378582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:06:45.844874  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:06:45.903518  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:06:46.239152  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:06:46.239201  378582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:06:46.345236  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:06:46.403893  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:06:46.739463  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:06:46.739652  378582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:06:46.845613  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:06:46.903313  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:06:47.335889  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:06:47.335935  378582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:06:47.344375  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:06:47.436442  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:06:47.739125  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:06:47.739160  378582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:06:47.845272  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:06:47.903217  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:06:48.239584  378582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:06:48.239601  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:06:48.344625  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:06:48.402725  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:06:48.738775  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:06:48.738951  378582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:06:48.844825  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:06:48.902864  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:06:49.238969  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:06:49.239008  378582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:06:49.345247  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:06:49.403919  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:06:49.740740  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:06:49.741032  378582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:06:49.847057  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:06:49.904636  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:06:50.239126  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:06:50.239470  378582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:06:50.344956  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:06:50.403707  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:06:50.739065  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:06:50.739156  378582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:06:50.845363  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:06:50.903171  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:06:51.322173  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:06:51.322503  378582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:06:51.438350  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:06:51.438450  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:06:51.739300  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:06:51.739431  378582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:06:51.845301  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:06:51.903717  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:06:52.239069  378582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:06:52.239091  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:06:52.345101  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:06:52.404193  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:06:52.739980  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:06:52.740267  378582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:06:52.897520  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:06:52.920456  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:06:53.238929  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:06:53.239072  378582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:06:53.344875  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:06:53.440712  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:06:53.739224  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:06:53.739309  378582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:06:53.845063  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:06:53.904705  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:06:54.239158  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:06:54.239191  378582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:06:54.345508  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:06:54.403476  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:06:54.739187  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:06:54.739250  378582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:06:54.844534  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:06:54.903368  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:06:55.239919  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:06:55.239957  378582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:06:55.344998  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:06:55.403028  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:06:55.739971  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:06:55.740037  378582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:06:55.844944  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:06:55.903476  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:06:56.238973  378582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:06:56.238989  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:06:56.344749  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:06:56.402830  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:06:56.738952  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:06:56.739058  378582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:06:56.844324  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:06:56.902277  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:06:57.239158  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:06:57.239190  378582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:06:57.344669  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:06:57.402447  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:06:57.739628  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:06:57.739696  378582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:06:57.844775  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:06:57.903177  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:06:58.239460  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:06:58.239659  378582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:06:58.344657  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:06:58.445025  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:06:58.739294  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:06:58.739352  378582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:06:58.844130  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:06:58.903387  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:06:59.248262  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:06:59.248452  378582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:06:59.343877  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:06:59.402833  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:06:59.739198  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:06:59.739276  378582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:06:59.844591  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:06:59.902469  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:07:00.238168  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:07:00.238221  378582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:07:00.345188  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:07:00.403572  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:07:00.738901  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:07:00.738936  378582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:07:00.844758  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:07:00.904509  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:07:01.239006  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:07:01.240366  378582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:07:01.346039  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:07:01.404229  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:07:01.740323  378582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:07:01.740599  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:07:01.845037  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:07:01.902990  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:07:02.239397  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:07:02.239485  378582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:07:02.344277  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:07:02.403737  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:07:02.739256  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:07:02.739317  378582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:07:02.844458  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:07:02.903397  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:07:03.238416  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:07:03.238490  378582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:07:03.344808  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:07:03.402192  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:07:03.739592  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:07:03.739635  378582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:07:03.843855  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:07:03.902781  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:07:04.239213  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:07:04.239393  378582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:07:04.345367  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:07:04.402921  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:07:04.739263  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:07:04.739463  378582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:07:04.844228  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:07:04.903593  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:07:05.238493  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:07:05.238542  378582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:07:05.343852  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:07:05.402886  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:07:05.739445  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:07:05.739479  378582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:07:05.844023  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:07:05.903185  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:07:06.239389  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:07:06.239431  378582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:07:06.344283  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:07:06.402557  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:07:06.738837  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:07:06.738866  378582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:07:06.845256  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:07:06.904092  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:07:07.239468  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:07:07.239622  378582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:07:07.344904  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:07:07.444849  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:07:07.739102  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:07:07.739174  378582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:07:07.844953  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:07:07.903231  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:07:08.239889  378582 kapi.go:107] duration metric: took 43.004298955s to wait for kubernetes.io/minikube-addons=registry ...
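
Each kapi.go:96 line above is one iteration of a poll: list the pods matching an addon's label selector and report Pending until every match is Running. A hedged client-go sketch of that loop; the helper name, interval, and error handling are illustrative assumptions, not minikube's own implementation:

package waitpods

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// WaitForLabel polls until every pod matching selector in ns is Running.
// While no pod matches yet, or any match is still Pending, the condition
// returns false and the poll continues -- the state the lines above print
// on each pass.
func WaitForLabel(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				return false, nil // treat transient API errors as "not ready yet"
			}
			if len(pods.Items) == 0 {
				return false, nil
			}
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					return false, nil
				}
			}
			return true, nil
		})
}

For the wait that just completed, the call would look like WaitForLabel(ctx, cs, "kube-system", "kubernetes.io/minikube-addons=registry", ...), corresponding to the 43s duration metric above.
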
	I1227 09:07:08.240113  378582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:07:08.345489  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:07:08.403630  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:07:08.739113  378582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:07:08.845959  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:07:08.903394  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:07:09.239827  378582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:07:09.345130  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:07:09.404090  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:07:09.739873  378582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:07:09.844460  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:07:09.902892  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:07:10.239546  378582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:07:10.344569  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:07:10.444677  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:07:10.739750  378582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:07:10.847530  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:07:10.903375  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:07:11.239859  378582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:07:11.344948  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:07:11.403630  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:07:11.739369  378582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:07:11.844132  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:07:11.903995  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:07:12.239024  378582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:07:12.345480  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:07:12.403403  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:07:12.739017  378582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:07:12.844754  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:07:12.903855  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:07:13.238781  378582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:07:13.345106  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:07:13.403741  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:07:13.739124  378582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:07:13.844582  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:07:13.902833  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:07:14.238360  378582 kapi.go:107] duration metric: took 49.002774169s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1227 09:07:14.344773  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:07:14.403340  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:07:14.844937  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:07:14.902994  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:07:15.344456  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:07:15.402447  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:07:15.845114  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:07:15.903273  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:07:16.345926  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:07:16.403849  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:07:16.844203  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:07:16.903934  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:07:17.344905  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:07:17.444975  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:07:17.844705  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:07:17.903700  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:07:18.345780  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:07:18.403865  378582 kapi.go:107] duration metric: took 46.503732497s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1227 09:07:18.405702  378582 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-102660 cluster.
	I1227 09:07:18.406860  378582 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1227 09:07:18.407866  378582 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
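
For the label mentioned two lines up, a minimal sketch of a pod carrying the gcp-auth-skip-secret key, written with client-go types. Per the message above only the key matters, so the value ("true") and the pod/container names are arbitrary illustrations:

package example

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// skipGCPAuthPod builds a pod the gcp-auth webhook will leave alone:
// it carries the gcp-auth-skip-secret label key.
func skipGCPAuthPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:   "no-creds",
			Labels: map[string]string{"gcp-auth-skip-secret": "true"},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "app",
				Image: "gcr.io/k8s-minikube/busybox:1.28.4-glibc",
			}},
		},
	}
}
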
	I1227 09:07:18.844520  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:07:19.345492  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:07:19.845726  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:07:20.344973  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:07:20.844638  378582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:07:21.344675  378582 kapi.go:107] duration metric: took 55.503529103s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1227 09:07:21.345845  378582 out.go:179] * Enabled addons: inspektor-gadget, nvidia-device-plugin, storage-provisioner, ingress-dns, registry-creds, cloud-spanner, metrics-server, amd-gpu-device-plugin, yakd, default-storageclass, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1227 09:07:21.347057  378582 addons.go:530] duration metric: took 57.733826895s for enable addons: enabled=[inspektor-gadget nvidia-device-plugin storage-provisioner ingress-dns registry-creds cloud-spanner metrics-server amd-gpu-device-plugin yakd default-storageclass volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1227 09:07:21.347112  378582 start.go:247] waiting for cluster config update ...
	I1227 09:07:21.347133  378582 start.go:256] writing updated cluster config ...
	I1227 09:07:21.347411  378582 ssh_runner.go:195] Run: rm -f paused
	I1227 09:07:21.351622  378582 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1227 09:07:21.354345  378582 pod_ready.go:83] waiting for pod "coredns-7d764666f9-n79db" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:07:21.357935  378582 pod_ready.go:94] pod "coredns-7d764666f9-n79db" is "Ready"
	I1227 09:07:21.357954  378582 pod_ready.go:86] duration metric: took 3.587392ms for pod "coredns-7d764666f9-n79db" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:07:21.359630  378582 pod_ready.go:83] waiting for pod "etcd-addons-102660" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:07:21.362952  378582 pod_ready.go:94] pod "etcd-addons-102660" is "Ready"
	I1227 09:07:21.362971  378582 pod_ready.go:86] duration metric: took 3.324724ms for pod "etcd-addons-102660" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:07:21.364654  378582 pod_ready.go:83] waiting for pod "kube-apiserver-addons-102660" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:07:21.369783  378582 pod_ready.go:94] pod "kube-apiserver-addons-102660" is "Ready"
	I1227 09:07:21.369821  378582 pod_ready.go:86] duration metric: took 5.147865ms for pod "kube-apiserver-addons-102660" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:07:21.371466  378582 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-102660" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:07:21.754829  378582 pod_ready.go:94] pod "kube-controller-manager-addons-102660" is "Ready"
	I1227 09:07:21.754857  378582 pod_ready.go:86] duration metric: took 383.373398ms for pod "kube-controller-manager-addons-102660" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:07:21.956040  378582 pod_ready.go:83] waiting for pod "kube-proxy-8mwfh" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:07:22.355900  378582 pod_ready.go:94] pod "kube-proxy-8mwfh" is "Ready"
	I1227 09:07:22.355929  378582 pod_ready.go:86] duration metric: took 399.862368ms for pod "kube-proxy-8mwfh" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:07:22.555880  378582 pod_ready.go:83] waiting for pod "kube-scheduler-addons-102660" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:07:22.955111  378582 pod_ready.go:94] pod "kube-scheduler-addons-102660" is "Ready"
	I1227 09:07:22.955143  378582 pod_ready.go:86] duration metric: took 399.236074ms for pod "kube-scheduler-addons-102660" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:07:22.955156  378582 pod_ready.go:40] duration metric: took 1.603501379s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
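
The pod_ready checks above reduce to the standard PodReady condition. A minimal sketch of that test (an illustration of the condition being read, not the test helper itself):

package example

import corev1 "k8s.io/api/core/v1"

// isPodReady reports what pod_ready.go:94 above logs: a pod counts as
// "Ready" when its PodReady condition has status True.
func isPodReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}
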
	I1227 09:07:22.998098  378582 start.go:625] kubectl: 1.35.0, cluster: 1.35.0 (minor skew: 0)
	I1227 09:07:22.999454  378582 out.go:179] * Done! kubectl is now configured to use "addons-102660" cluster and "default" namespace by default
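
The start.go:625 line reports the kubectl/cluster minor-version skew, zero here. A hypothetical helper showing how such a figure can be computed from the two version strings; the names and parsing are assumptions, not minikube's code:

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minor extracts the minor number from a "major.minor.patch" version string;
// input is assumed well formed, as in the log line above.
func minor(v string) int {
	parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
	if len(parts) < 2 {
		return 0
	}
	m, _ := strconv.Atoi(parts[1])
	return m
}

func main() {
	client, server := "1.35.0", "1.35.0"
	skew := minor(client) - minor(server)
	if skew < 0 {
		skew = -skew
	}
	fmt.Printf("minor skew: %d\n", skew) // 0, matching the report above
}
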
	
	
	==> CRI-O <==
	Dec 27 09:07:23 addons-102660 crio[773]: time="2025-12-27T09:07:23.805524549Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=6870c4e4-640b-4451-a825-6d7f74c9cad6 name=/runtime.v1.ImageService/PullImage
	Dec 27 09:07:23 addons-102660 crio[773]: time="2025-12-27T09:07:23.806901384Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 27 09:07:25 addons-102660 crio[773]: time="2025-12-27T09:07:25.643173629Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=6870c4e4-640b-4451-a825-6d7f74c9cad6 name=/runtime.v1.ImageService/PullImage
	Dec 27 09:07:25 addons-102660 crio[773]: time="2025-12-27T09:07:25.643837179Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=0610d05b-9494-4eb9-a3be-25a40cfe6db0 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 09:07:25 addons-102660 crio[773]: time="2025-12-27T09:07:25.645634694Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=ec73ecc7-6f05-4478-a1d7-59429a799420 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 09:07:25 addons-102660 crio[773]: time="2025-12-27T09:07:25.648712667Z" level=info msg="Creating container: default/busybox/busybox" id=381bd073-6370-4a0c-969b-f7aad0a667e6 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 09:07:25 addons-102660 crio[773]: time="2025-12-27T09:07:25.648839925Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 09:07:25 addons-102660 crio[773]: time="2025-12-27T09:07:25.65440884Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 09:07:25 addons-102660 crio[773]: time="2025-12-27T09:07:25.654868417Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 09:07:25 addons-102660 crio[773]: time="2025-12-27T09:07:25.692638659Z" level=info msg="Created container 19164d8f0e31212f898500165e4864973db467f0e0cd65000892213fe354ce48: default/busybox/busybox" id=381bd073-6370-4a0c-969b-f7aad0a667e6 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 09:07:25 addons-102660 crio[773]: time="2025-12-27T09:07:25.693192103Z" level=info msg="Starting container: 19164d8f0e31212f898500165e4864973db467f0e0cd65000892213fe354ce48" id=6320b229-c33a-4a30-94e1-6de14b0387a3 name=/runtime.v1.RuntimeService/StartContainer
	Dec 27 09:07:25 addons-102660 crio[773]: time="2025-12-27T09:07:25.694847868Z" level=info msg="Started container" PID=6293 containerID=19164d8f0e31212f898500165e4864973db467f0e0cd65000892213fe354ce48 description=default/busybox/busybox id=6320b229-c33a-4a30-94e1-6de14b0387a3 name=/runtime.v1.RuntimeService/StartContainer sandboxID=df46b539561739873c7b3a86226f6fbedfd2b6d4a5f7074fb8d7845a377d4bf4
	Dec 27 09:07:33 addons-102660 crio[773]: time="2025-12-27T09:07:33.586600315Z" level=info msg="Running pod sandbox: local-path-storage/helper-pod-create-pvc-2ac3503c-e941-4af0-9082-e5c6e6529b7a/POD" id=48916034-caa6-4a0a-aa78-79ded0a4e5c7 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 27 09:07:33 addons-102660 crio[773]: time="2025-12-27T09:07:33.586668036Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 09:07:33 addons-102660 crio[773]: time="2025-12-27T09:07:33.592949592Z" level=info msg="Got pod network &{Name:helper-pod-create-pvc-2ac3503c-e941-4af0-9082-e5c6e6529b7a Namespace:local-path-storage ID:168b456354ef0bc4ffe924e6ace023ec40f59f7cd68182610eec8459f62d664a UID:b287accc-1a5a-4e7a-9e08-f2c6c5cd75aa NetNS:/var/run/netns/bdd97358-8254-49cd-9294-8db2d834a330 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0009f8ed8}] Aliases:map[]}"
	Dec 27 09:07:33 addons-102660 crio[773]: time="2025-12-27T09:07:33.59298055Z" level=info msg="Adding pod local-path-storage_helper-pod-create-pvc-2ac3503c-e941-4af0-9082-e5c6e6529b7a to CNI network \"kindnet\" (type=ptp)"
	Dec 27 09:07:33 addons-102660 crio[773]: time="2025-12-27T09:07:33.602802179Z" level=info msg="Got pod network &{Name:helper-pod-create-pvc-2ac3503c-e941-4af0-9082-e5c6e6529b7a Namespace:local-path-storage ID:168b456354ef0bc4ffe924e6ace023ec40f59f7cd68182610eec8459f62d664a UID:b287accc-1a5a-4e7a-9e08-f2c6c5cd75aa NetNS:/var/run/netns/bdd97358-8254-49cd-9294-8db2d834a330 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0009f8ed8}] Aliases:map[]}"
	Dec 27 09:07:33 addons-102660 crio[773]: time="2025-12-27T09:07:33.602945467Z" level=info msg="Checking pod local-path-storage_helper-pod-create-pvc-2ac3503c-e941-4af0-9082-e5c6e6529b7a for CNI network kindnet (type=ptp)"
	Dec 27 09:07:33 addons-102660 crio[773]: time="2025-12-27T09:07:33.603786906Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 27 09:07:33 addons-102660 crio[773]: time="2025-12-27T09:07:33.604638993Z" level=info msg="Ran pod sandbox 168b456354ef0bc4ffe924e6ace023ec40f59f7cd68182610eec8459f62d664a with infra container: local-path-storage/helper-pod-create-pvc-2ac3503c-e941-4af0-9082-e5c6e6529b7a/POD" id=48916034-caa6-4a0a-aa78-79ded0a4e5c7 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 27 09:07:33 addons-102660 crio[773]: time="2025-12-27T09:07:33.605927613Z" level=info msg="Checking image status: docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" id=43fb6e2d-9e02-47e6-aab5-017affc9a9ad name=/runtime.v1.ImageService/ImageStatus
	Dec 27 09:07:33 addons-102660 crio[773]: time="2025-12-27T09:07:33.606122109Z" level=info msg="Image docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79 not found" id=43fb6e2d-9e02-47e6-aab5-017affc9a9ad name=/runtime.v1.ImageService/ImageStatus
	Dec 27 09:07:33 addons-102660 crio[773]: time="2025-12-27T09:07:33.606191865Z" level=info msg="Neither image nor artfiact docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79 found" id=43fb6e2d-9e02-47e6-aab5-017affc9a9ad name=/runtime.v1.ImageService/ImageStatus
	Dec 27 09:07:33 addons-102660 crio[773]: time="2025-12-27T09:07:33.606964965Z" level=info msg="Pulling image: docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" id=2d0e9eb5-2cd1-49f0-a455-93be38d2982a name=/runtime.v1.ImageService/PullImage
	Dec 27 09:07:33 addons-102660 crio[773]: time="2025-12-27T09:07:33.611213655Z" level=info msg="Trying to access \"docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79\""
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD                                         NAMESPACE
	19164d8f0e312       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998                                          8 seconds ago        Running             busybox                                  0                   df46b53956173       busybox                                     default
	61cbe8e837adc       registry.k8s.io/sig-storage/csi-snapshotter@sha256:d844cb1faeb4ecf44bae6aea370c9c6128a87e665e40370021427d79a8819ee5                          13 seconds ago       Running             csi-snapshotter                          0                   d61465b12236f       csi-hostpathplugin-fql4d                    kube-system
	5ce019c3bd795       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          14 seconds ago       Running             csi-provisioner                          0                   d61465b12236f       csi-hostpathplugin-fql4d                    kube-system
	bc00ef2c40687       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            15 seconds ago       Running             liveness-probe                           0                   d61465b12236f       csi-hostpathplugin-fql4d                    kube-system
	b84f056d766ce       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           16 seconds ago       Running             hostpath                                 0                   d61465b12236f       csi-hostpathplugin-fql4d                    kube-system
	ec60bd008ef3b       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:441f351b4520c228d29ba8c02a438d9ba971dafbbba5c91eaf882b1528797fb8                                 16 seconds ago       Running             gcp-auth                                 0                   aae9e66be37f4       gcp-auth-5bbcf684b5-tdck9                   gcp-auth
	c811e10f8822a       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                19 seconds ago       Running             node-driver-registrar                    0                   d61465b12236f       csi-hostpathplugin-fql4d                    kube-system
	cb1dce345d2de       registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad                             20 seconds ago       Running             controller                               0                   dd0330c178fde       ingress-nginx-controller-7847b5c79c-5sxqr   ingress-nginx
	59602e9b4861f       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:ea428be7b01d41418fca4d91ae3dff6b037bdc0d42757e7ad392a38536488a1a                            23 seconds ago       Running             gadget                                   0                   e53219a56efae       gadget-mqf57                                gadget
	a228318485835       gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac                              26 seconds ago       Running             registry-proxy                           0                   79d8e3658c3d9       registry-proxy-m2g76                        kube-system
	94a18ba09d75a       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     29 seconds ago       Running             amd-gpu-device-plugin                    0                   225d1269b648c       amd-gpu-device-plugin-77gfj                 kube-system
	97eda9aefcf0d       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   31 seconds ago       Running             csi-external-health-monitor-controller   0                   d61465b12236f       csi-hostpathplugin-fql4d                    kube-system
	29b050443a0d0       nvcr.io/nvidia/k8s-device-plugin@sha256:c3c1a099015d1810c249ba294beaad656ce0354f7e8a77803dacabe60a4f8c9f                                     32 seconds ago       Running             nvidia-device-plugin-ctr                 0                   8e23ad30fa980       nvidia-device-plugin-daemonset-4jxql        kube-system
	9fbd3357952f0       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:e2d8d9e1553c1ac5f9f41bc34d38d1eda519ed77a3106b036c43b6667dad19a9                   35 seconds ago       Exited              patch                                    0                   530173be518d0       ingress-nginx-admission-patch-frcft         ingress-nginx
	2fdf7f14509c2       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:e2d8d9e1553c1ac5f9f41bc34d38d1eda519ed77a3106b036c43b6667dad19a9                   36 seconds ago       Exited              patch                                    0                   584b996a2ac11       gcp-auth-certs-patch-jdqnc                  gcp-auth
	759eef689eba0       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:e2d8d9e1553c1ac5f9f41bc34d38d1eda519ed77a3106b036c43b6667dad19a9                   36 seconds ago       Exited              create                                   0                   a1c0a769d48ab       gcp-auth-certs-create-js9cv                 gcp-auth
	752ba21e5d54a       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             36 seconds ago       Running             local-path-provisioner                   0                   73bf408917b8c       local-path-provisioner-c44bcd496-kdgkt      local-path-storage
	163a0aaa6a06f       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             37 seconds ago       Running             csi-attacher                             0                   de9c9f5db6169       csi-hostpath-attacher-0                     kube-system
	8c32e03d7b7ea       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      38 seconds ago       Running             volume-snapshot-controller               0                   1d926143082d0       snapshot-controller-6588d87457-52l8t        kube-system
	95be8942fae68       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              39 seconds ago       Running             csi-resizer                              0                   b5ae0f38351b8       csi-hostpath-resizer-0                      kube-system
	0100055c8dfbc       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      39 seconds ago       Running             volume-snapshot-controller               0                   c98af95d012d3       snapshot-controller-6588d87457-n2hhv        kube-system
	94c6915787e83       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7                               40 seconds ago       Running             minikube-ingress-dns                     0                   78a0226b7a2cb       kube-ingress-dns-minikube                   kube-system
	42bbe005fc734       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:e2d8d9e1553c1ac5f9f41bc34d38d1eda519ed77a3106b036c43b6667dad19a9                   46 seconds ago       Exited              create                                   0                   f7d4d57f59568       ingress-nginx-admission-create-llq44        ingress-nginx
	1413f858328d1       ghcr.io/manusa/yakd@sha256:45d2fe163841511e351ae36a5e434fb854a886b0d6a70cea692bd707543fd8c6                                                  46 seconds ago       Running             yakd                                     0                   cf7eb36addef0       yakd-dashboard-7bcf5795cd-shhhd             yakd-dashboard
	304c92f5a5c84       gcr.io/cloud-spanner-emulator/emulator@sha256:b948b04b45496ebeb13eee27bc9d238593c142e8e010443892153f181591abde                               49 seconds ago       Running             cloud-spanner-emulator                   0                   5ffb2d0ad45d6       cloud-spanner-emulator-5649ccbc87-kr2js     default
	e5e28883bcce1       docker.io/library/registry@sha256:f57ffd2bb01704b6082396158e77ca6d1112bc6fe32315c322864de804750d8a                                           53 seconds ago       Running             registry                                 0                   4a67760a06517       registry-788cd7d5bc-k2xwn                   kube-system
	f8332e2291843       registry.k8s.io/metrics-server/metrics-server@sha256:5dd31abb8093690d9624a53277a00d2257e7e57e6766be3f9f54cf9f54cddbc1                        55 seconds ago       Running             metrics-server                           0                   612e211b0bff8       metrics-server-5778bb4788-qh2dz             kube-system
	a7360b2982e80       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                                                             56 seconds ago       Running             coredns                                  0                   eaeca24bd11a2       coredns-7d764666f9-n79db                    kube-system
	4901f7247dd76       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             56 seconds ago       Running             storage-provisioner                      0                   709db9327dab6       storage-provisioner                         kube-system
	63d37d0b224df       docker.io/kindest/kindnetd@sha256:7c22558dc06a570d46ea6e8a73b23cdc754eb81f7c08d3441a3171ad359ffc27                                           About a minute ago   Running             kindnet-cni                              0                   ca941aaf79364       kindnet-cvqk2                               kube-system
	8dc04e3833a23       32652ff1bbe6b6df16a3dc621fcad4e6e185a2852bffc5847aab79e36c54bfb8                                                                             About a minute ago   Running             kube-proxy                               0                   f3087cfe564f5       kube-proxy-8mwfh                            kube-system
	9e1905ef463d3       0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2                                                                             About a minute ago   Running             etcd                                     0                   e9147fb041de9       etcd-addons-102660                          kube-system
	8c753ac1232bc       550794e3b12ac21ec7fd940bdfb45f7c8dae3c52acd8c70e580b882de20c3dcc                                                                             About a minute ago   Running             kube-scheduler                           0                   8282b7c8f8ef0       kube-scheduler-addons-102660                kube-system
	3ea9a1cdacfc9       5c6acd67e9cd10eb60a246cd233db251ef62ea97e6572f897e873f0cb648f499                                                                             About a minute ago   Running             kube-apiserver                           0                   aa9af37627dac       kube-apiserver-addons-102660                kube-system
	16c4fbdade2e6       2c9a4b058bd7e6ec479c38f9e8a1dac2f5ee5b0a3ebda6dfac968e8720229508                                                                             About a minute ago   Running             kube-controller-manager                  0                   1c3ddd0108efa       kube-controller-manager-addons-102660       kube-system
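	The listing above follows crictl's container table layout (CONTAINER, IMAGE, CREATED, STATE, NAME, ATTEMPT, POD ID, POD, NAMESPACE); a snapshot like it can usually be regenerated with, assuming the same profile:
	    minikube -p addons-102660 ssh -- sudo crictl ps -a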
	
	
	==> coredns [a7360b2982e804662d1c021be1dfe58a9a695fe80b6c5d49bfb0e40b1ecfe587] <==
	[INFO] 10.244.0.16:41693 - 57986 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000155084s
	[INFO] 10.244.0.16:60775 - 12608 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000069789s
	[INFO] 10.244.0.16:60775 - 12272 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000097732s
	[INFO] 10.244.0.16:43468 - 8868 "AAAA IN registry.kube-system.svc.cluster.local.europe-west2-a.c.k8s-minikube.internal. udp 95 false 512" NXDOMAIN qr,aa,rd,ra 206 0.000083357s
	[INFO] 10.244.0.16:43468 - 8515 "A IN registry.kube-system.svc.cluster.local.europe-west2-a.c.k8s-minikube.internal. udp 95 false 512" NXDOMAIN qr,aa,rd,ra 206 0.000122874s
	[INFO] 10.244.0.16:55514 - 27502 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000036036s
	[INFO] 10.244.0.16:55514 - 27219 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000050871s
	[INFO] 10.244.0.16:59236 - 30602 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000062894s
	[INFO] 10.244.0.16:59236 - 30837 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000096107s
	[INFO] 10.244.0.16:35484 - 20635 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000091773s
	[INFO] 10.244.0.16:35484 - 20424 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00012254s
	[INFO] 10.244.0.21:39907 - 44273 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000184852s
	[INFO] 10.244.0.21:53381 - 19623 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000233829s
	[INFO] 10.244.0.21:39840 - 63331 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00012023s
	[INFO] 10.244.0.21:41644 - 6986 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000174679s
	[INFO] 10.244.0.21:56720 - 46461 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000109618s
	[INFO] 10.244.0.21:38769 - 44138 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000154741s
	[INFO] 10.244.0.21:56267 - 6125 "A IN storage.googleapis.com.europe-west2-a.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 190 0.004454278s
	[INFO] 10.244.0.21:57927 - 19595 "AAAA IN storage.googleapis.com.europe-west2-a.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 190 0.004922647s
	[INFO] 10.244.0.21:37294 - 56003 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.003909764s
	[INFO] 10.244.0.21:60187 - 12562 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.004088349s
	[INFO] 10.244.0.21:41284 - 23230 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.003826215s
	[INFO] 10.244.0.21:47387 - 22621 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.003842083s
	[INFO] 10.244.0.21:46164 - 33216 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 344 0.000837613s
	[INFO] 10.244.0.21:48303 - 10058 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000957324s
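	The long NXDOMAIN chains above are expected, not errors: with the default pod DNS config (ndots:5), a short name such as storage.googleapis.com is first tried against each search domain (gcp-auth.svc.cluster.local, svc.cluster.local, cluster.local, plus the GCE-provided internal suffixes) before the bare name finally resolves NOERROR. A sketch for inspecting the search list from a pod in this cluster, assuming the gcp-auth Deployment is named gcp-auth:
	    kubectl -n gcp-auth exec deploy/gcp-auth -- cat /etc/resolv.conf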
	
	
	==> describe nodes <==
	Name:               addons-102660
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-102660
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6d12708197eb5984f26340287ced0d25367967e8
	                    minikube.k8s.io/name=addons-102660
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_27T09_06_18_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-102660
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-102660"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 27 Dec 2025 09:06:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-102660
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 27 Dec 2025 09:07:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 27 Dec 2025 09:07:19 +0000   Sat, 27 Dec 2025 09:06:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 27 Dec 2025 09:07:19 +0000   Sat, 27 Dec 2025 09:06:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 27 Dec 2025 09:07:19 +0000   Sat, 27 Dec 2025 09:06:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 27 Dec 2025 09:07:19 +0000   Sat, 27 Dec 2025 09:06:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-102660
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 6a05ebbd9dbde47388fc365d694bbbfd
	  System UUID:                bf2f84c8-ee27-4bf7-af7c-94fcaa08822e
	  Boot ID:                    38891013-55ac-4a66-aef6-0b0711ecc60c
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (28 in total)
	  Namespace                   Name                                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  default                     cloud-spanner-emulator-5649ccbc87-kr2js                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         70s
	  gadget                      gadget-mqf57                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         70s
	  gcp-auth                    gcp-auth-5bbcf684b5-tdck9                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         63s
	  ingress-nginx               ingress-nginx-controller-7847b5c79c-5sxqr                     100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         69s
	  kube-system                 amd-gpu-device-plugin-77gfj                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         58s
	  kube-system                 coredns-7d764666f9-n79db                                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     71s
	  kube-system                 csi-hostpath-attacher-0                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         69s
	  kube-system                 csi-hostpath-resizer-0                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         69s
	  kube-system                 csi-hostpathplugin-fql4d                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         58s
	  kube-system                 etcd-addons-102660                                            100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         76s
	  kube-system                 kindnet-cvqk2                                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      71s
	  kube-system                 kube-apiserver-addons-102660                                  250m (3%)     0 (0%)      0 (0%)           0 (0%)         76s
	  kube-system                 kube-controller-manager-addons-102660                         200m (2%)     0 (0%)      0 (0%)           0 (0%)         76s
	  kube-system                 kube-ingress-dns-minikube                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         70s
	  kube-system                 kube-proxy-8mwfh                                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         71s
	  kube-system                 kube-scheduler-addons-102660                                  100m (1%)     0 (0%)      0 (0%)           0 (0%)         76s
	  kube-system                 metrics-server-5778bb4788-qh2dz                               100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         70s
	  kube-system                 nvidia-device-plugin-daemonset-4jxql                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         58s
	  kube-system                 registry-788cd7d5bc-k2xwn                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         70s
	  kube-system                 registry-creds-567fb78d95-42kx7                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         70s
	  kube-system                 registry-proxy-m2g76                                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         58s
	  kube-system                 snapshot-controller-6588d87457-52l8t                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         69s
	  kube-system                 snapshot-controller-6588d87457-n2hhv                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         69s
	  kube-system                 storage-provisioner                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         70s
	  local-path-storage          helper-pod-create-pvc-2ac3503c-e941-4af0-9082-e5c6e6529b7a    0 (0%)        0 (0%)      0 (0%)           0 (0%)         1s
	  local-path-storage          local-path-provisioner-c44bcd496-kdgkt                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         70s
	  yakd-dashboard              yakd-dashboard-7bcf5795cd-shhhd                               0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     69s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             638Mi (1%)   476Mi (1%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  72s   node-controller  Node addons-102660 event: Registered Node addons-102660 in Controller
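	This block matches kubectl's node description output; it can usually be regenerated with the following (minikube sets the kubectl context to the profile name by default):
	    kubectl --context addons-102660 describe node addons-102660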
	
	
	==> dmesg <==
	[  +5.107432] kauditd_printk_skb: 47 callbacks suppressed
	[Dec27 09:02] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 3a ce 76 ea 00 16 08 06
	[ +15.308804] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 5e ed 76 f4 5a 59 08 06
	[  +0.000321] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 3a ce 76 ea 00 16 08 06
	[  +1.257710] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 62 30 8d 6e b6 d3 08 06
	[Dec27 09:03] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff de f8 3b c4 47 35 08 06
	[  +0.002052] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff de 0f d7 5a 4b 84 08 06
	[ +15.582399] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff f6 ce 0c 6c 86 d2 08 06
	[  +0.000307] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 62 30 8d 6e b6 d3 08 06
	[  +9.916919] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 5e cc 0c a0 d8 19 08 06
	[  +0.000360] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 06 01 72 f8 79 43 08 06
	[ +21.695394] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 66 bb be 81 50 ce 08 06
	[  +0.000381] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff de 0f d7 5a 4b 84 08 06
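	The repeated "martian source" lines are the kernel logging packets whose source address is unexpected on eth0; they appear when net.ipv4.conf.*.log_martians is enabled and are common, and generally benign, while kindnet's ptp veths come and go during pod churn. To confirm the sysctl on this node, as a sketch with the same profile assumed:
	    minikube -p addons-102660 ssh -- sysctl net.ipv4.conf.all.log_martians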
	
	
	==> etcd [9e1905ef463d325bf6957345908cff92bced86fe3aa688b53f1bd2132d21eba5] <==
	{"level":"info","ts":"2025-12-27T09:06:14.821991Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2025-12-27T09:06:14.822602Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2025-12-27T09:06:14.822619Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"aec36adc501070cc has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-27T09:06:14.822641Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2025-12-27T09:06:14.822651Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2025-12-27T09:06:14.823197Z","caller":"etcdserver/server.go:2420","msg":"setting up initial cluster version using v3 API","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-27T09:06:14.823689Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T09:06:14.823688Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-102660 ClientURLs:[https://192.168.49.2:2379]}","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-27T09:06:14.823730Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T09:06:14.823922Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-27T09:06:14.823952Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-27T09:06:14.823938Z","caller":"membership/cluster.go:682","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-27T09:06:14.824138Z","caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-27T09:06:14.824207Z","caller":"etcdserver/server.go:2440","msg":"cluster version is updated","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-27T09:06:14.824293Z","caller":"version/monitor.go:116","msg":"cluster version differs from storage version.","cluster-version":"3.6.0","storage-version":"3.5.0"}
	{"level":"info","ts":"2025-12-27T09:06:14.824409Z","caller":"schema/migration.go:65","msg":"updated storage version","new-storage-version":"3.6.0"}
	{"level":"info","ts":"2025-12-27T09:06:14.824946Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-27T09:06:14.825055Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-27T09:06:14.827478Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2025-12-27T09:06:14.827569Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-27T09:06:51.320628Z","caller":"traceutil/trace.go:172","msg":"trace[152554505] transaction","detail":"{read_only:false; response_revision:998; number_of_response:1; }","duration":"175.898499ms","start":"2025-12-27T09:06:51.144712Z","end":"2025-12-27T09:06:51.320611Z","steps":["trace[152554505] 'process raft request'  (duration: 175.763691ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-27T09:06:51.436403Z","caller":"traceutil/trace.go:172","msg":"trace[1232388707] transaction","detail":"{read_only:false; response_revision:999; number_of_response:1; }","duration":"110.003283ms","start":"2025-12-27T09:06:51.326381Z","end":"2025-12-27T09:06:51.436384Z","steps":["trace[1232388707] 'process raft request'  (duration: 109.859541ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-27T09:06:59.125291Z","caller":"traceutil/trace.go:172","msg":"trace[1669204030] transaction","detail":"{read_only:false; response_revision:1059; number_of_response:1; }","duration":"137.984467ms","start":"2025-12-27T09:06:58.987287Z","end":"2025-12-27T09:06:59.125272Z","steps":["trace[1669204030] 'process raft request'  (duration: 137.830916ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-27T09:06:59.158011Z","caller":"traceutil/trace.go:172","msg":"trace[1210941837] transaction","detail":"{read_only:false; response_revision:1060; number_of_response:1; }","duration":"124.808845ms","start":"2025-12-27T09:06:59.033187Z","end":"2025-12-27T09:06:59.157996Z","steps":["trace[1210941837] 'process raft request'  (duration: 124.710481ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-27T09:06:59.246667Z","caller":"traceutil/trace.go:172","msg":"trace[537536298] transaction","detail":"{read_only:false; response_revision:1061; number_of_response:1; }","duration":"114.049114ms","start":"2025-12-27T09:06:59.132600Z","end":"2025-12-27T09:06:59.246649Z","steps":["trace[537536298] 'process raft request'  (duration: 97.766237ms)","trace[537536298] 'compare'  (duration: 16.091067ms)"],"step_count":2}
	
	
	==> gcp-auth [ec60bd008ef3b8fbed026c80770a62c3fab883c88f54e6c3fbddea874b28124a] <==
	2025/12/27 09:07:17 GCP Auth Webhook started!
	2025/12/27 09:07:23 Ready to marshal response ...
	2025/12/27 09:07:23 Ready to write response ...
	2025/12/27 09:07:23 Ready to marshal response ...
	2025/12/27 09:07:23 Ready to write response ...
	2025/12/27 09:07:23 Ready to marshal response ...
	2025/12/27 09:07:23 Ready to write response ...
	2025/12/27 09:07:33 Ready to marshal response ...
	2025/12/27 09:07:33 Ready to write response ...
	2025/12/27 09:07:33 Ready to marshal response ...
	2025/12/27 09:07:33 Ready to write response ...
	
	
	==> kernel <==
	 09:07:34 up 49 min,  0 user,  load average: 1.52, 2.32, 2.18
	Linux addons-102660 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
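	These three lines come from uptime, uname -a, and the node's os-release file; a one-shot reproduction sketch, same profile assumed:
	    minikube -p addons-102660 ssh -- "uptime && uname -a && grep PRETTY_NAME /etc/os-release"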
	
	
	==> kindnet [63d37d0b224df2dcac1ae61236ee850f35706ab817a808c84f20ad3e4cb2a302] <==
	I1227 09:06:26.583649       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1227 09:06:26.583822       1 main.go:148] setting mtu 1500 for CNI 
	I1227 09:06:26.583844       1 main.go:178] kindnetd IP family: "ipv4"
	I1227 09:06:26.583863       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-27T09:06:26Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1227 09:06:26.756579       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1227 09:06:26.756607       1 controller.go:381] "Waiting for informer caches to sync"
	I1227 09:06:26.756615       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1227 09:06:26.757170       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1227 09:06:27.056972       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1227 09:06:27.056999       1 metrics.go:72] Registering metrics
	I1227 09:06:27.057062       1 controller.go:711] "Syncing nftables rules"
	I1227 09:06:36.759928       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1227 09:06:36.759993       1 main.go:301] handling current node
	I1227 09:06:46.756491       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1227 09:06:46.756539       1 main.go:301] handling current node
	I1227 09:06:56.756617       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1227 09:06:56.756685       1 main.go:301] handling current node
	I1227 09:07:06.756881       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1227 09:07:06.756940       1 main.go:301] handling current node
	I1227 09:07:16.756612       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1227 09:07:16.756655       1 main.go:301] handling current node
	I1227 09:07:26.756461       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1227 09:07:26.756497       1 main.go:301] handling current node
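	The single "nri plugin exited" line is kindnet noting that CRI-O here does not expose an NRI socket; kindnet logs it once and keeps syncing (the node-handling loop above continues every 10 s). A quick existence check, as a sketch:
	    minikube -p addons-102660 ssh -- ls -l /var/run/nri/nri.sock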
	
	
	==> kube-apiserver [3ea9a1cdacfc9634d7733f44b141e168572272cb1d2febf81605572e89c7bb0b] <==
	W1227 09:06:26.340274       1 logging.go:55] [core] [Channel #263 SubChannel #264]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	I1227 09:06:31.843076       1 alloc.go:329] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs={"IPv4":"10.101.134.108"}
	W1227 09:06:36.942187       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.101.134.108:443: connect: connection refused
	E1227 09:06:36.942774       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.101.134.108:443: connect: connection refused" logger="UnhandledError"
	W1227 09:06:36.942347       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.101.134.108:443: connect: connection refused
	E1227 09:06:36.943029       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.101.134.108:443: connect: connection refused" logger="UnhandledError"
	W1227 09:06:36.962822       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.101.134.108:443: connect: connection refused
	E1227 09:06:36.962862       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.101.134.108:443: connect: connection refused" logger="UnhandledError"
	W1227 09:06:36.969589       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.101.134.108:443: connect: connection refused
	E1227 09:06:36.969778       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.101.134.108:443: connect: connection refused" logger="UnhandledError"
	E1227 09:06:40.094352       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.97.106.184:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.97.106.184:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.97.106.184:443: connect: connection refused" logger="UnhandledError"
	W1227 09:06:40.095060       1 handler_proxy.go:99] no RequestInfo found in the context
	E1227 09:06:40.095162       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1227 09:06:40.095309       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.97.106.184:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.97.106.184:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.97.106.184:443: connect: connection refused" logger="UnhandledError"
	E1227 09:06:40.100536       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.97.106.184:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.97.106.184:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.97.106.184:443: connect: connection refused" logger="UnhandledError"
	I1227 09:06:40.154242       1 handler.go:304] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1227 09:06:52.497426       1 logging.go:55] [core] [Channel #270 SubChannel #271]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1227 09:06:52.505781       1 logging.go:55] [core] [Channel #274 SubChannel #275]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1227 09:06:52.601785       1 logging.go:55] [core] [Channel #278 SubChannel #279]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1227 09:06:52.610138       1 logging.go:55] [core] [Channel #282 SubChannel #283]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	E1227 09:07:32.624064       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:33460: use of closed network connection
	E1227 09:07:32.765030       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:33488: use of closed network connection
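	The gcp-auth-mutate.k8s.io failures at 09:06:36 are a startup race: the webhook service endpoint (10.101.134.108:443) was not serving yet (the gcp-auth pod only reported started at 09:07:17 in its log above), and the webhook fails open per the dispatcher messages, so admission proceeded. To list the registration, as a sketch (the configuration's exact name is not shown in this report):
	    kubectl get mutatingwebhookconfigurations | grep gcp-auth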
	
	
	==> kube-controller-manager [16c4fbdade2e6b0e19d541ba379ce4ddd0e77b13d32e05b825055943a7d19bc4] <==
	I1227 09:06:22.485489       1 shared_informer.go:377] "Caches are synced"
	I1227 09:06:22.485138       1 shared_informer.go:377] "Caches are synced"
	I1227 09:06:22.485198       1 shared_informer.go:377] "Caches are synced"
	I1227 09:06:22.485906       1 shared_informer.go:377] "Caches are synced"
	I1227 09:06:22.486043       1 shared_informer.go:377] "Caches are synced"
	I1227 09:06:22.486091       1 shared_informer.go:377] "Caches are synced"
	I1227 09:06:22.486119       1 shared_informer.go:377] "Caches are synced"
	I1227 09:06:22.486160       1 range_allocator.go:177] "Sending events to api server"
	I1227 09:06:22.486276       1 range_allocator.go:181] "Starting range CIDR allocator"
	I1227 09:06:22.486287       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 09:06:22.486293       1 shared_informer.go:377] "Caches are synced"
	I1227 09:06:22.486098       1 shared_informer.go:377] "Caches are synced"
	I1227 09:06:22.486215       1 shared_informer.go:377] "Caches are synced"
	I1227 09:06:22.486108       1 shared_informer.go:377] "Caches are synced"
	I1227 09:06:22.493702       1 range_allocator.go:433] "Set node PodCIDR" node="addons-102660" podCIDRs=["10.244.0.0/24"]
	I1227 09:06:22.582078       1 shared_informer.go:377] "Caches are synced"
	I1227 09:06:22.582099       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1227 09:06:22.582105       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1227 09:06:22.584203       1 shared_informer.go:377] "Caches are synced"
	I1227 09:06:37.479888       1 node_lifecycle_controller.go:1057] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I1227 09:06:52.489967       1 resource_quota_monitor.go:228] "QuotaMonitor created object count evaluator" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1227 09:06:52.490041       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 09:06:52.590190       1 shared_informer.go:377] "Caches are synced"
	I1227 09:06:52.594597       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 09:06:52.694880       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [8dc04e3833a231fa5179c87f14246a81d854de5babe95333b3b15e87bcfcf039] <==
	I1227 09:06:24.260902       1 server_linux.go:53] "Using iptables proxy"
	I1227 09:06:24.605642       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 09:06:24.708887       1 shared_informer.go:377] "Caches are synced"
	I1227 09:06:24.715464       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1227 09:06:24.720929       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1227 09:06:24.805339       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1227 09:06:24.805485       1 server_linux.go:136] "Using iptables Proxier"
	I1227 09:06:24.836410       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1227 09:06:24.836966       1 server.go:529] "Version info" version="v1.35.0"
	I1227 09:06:24.837003       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 09:06:24.850144       1 config.go:200] "Starting service config controller"
	I1227 09:06:24.852020       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1227 09:06:24.851607       1 config.go:106] "Starting endpoint slice config controller"
	I1227 09:06:24.852555       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1227 09:06:24.851658       1 config.go:403] "Starting serviceCIDR config controller"
	I1227 09:06:24.852649       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1227 09:06:24.852456       1 config.go:309] "Starting node config controller"
	I1227 09:06:24.852774       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1227 09:06:24.852828       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1227 09:06:24.954239       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1227 09:06:24.954284       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1227 09:06:24.954327       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
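	The lone kube-proxy warning is advisory: with nodePortAddresses unset, NodePort connections are accepted on every local IP. The remedy the message itself suggests is the --nodeport-addresses primary flag (or the equivalent nodePortAddresses field in a KubeProxyConfiguration file); it was not applied in this run:
	    # sketch of the suggested flag, not used by this job
	    kube-proxy --nodeport-addresses primary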
	
	
	==> kube-scheduler [8c753ac1232bc2fd8a5f54e08381a58915380ecedec1d0a778df7de68269a65b] <==
	E1227 09:06:15.706508       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E1227 09:06:15.706545       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E1227 09:06:15.706574       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E1227 09:06:15.706574       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E1227 09:06:15.706607       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E1227 09:06:15.706754       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E1227 09:06:15.706869       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E1227 09:06:15.706941       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E1227 09:06:15.706975       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E1227 09:06:15.706981       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E1227 09:06:15.707112       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E1227 09:06:15.707360       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E1227 09:06:15.707494       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E1227 09:06:15.707625       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E1227 09:06:16.538720       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1693" type="*v1.ConfigMap"
	E1227 09:06:16.564814       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E1227 09:06:16.657074       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E1227 09:06:16.673970       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E1227 09:06:16.685860       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E1227 09:06:16.745450       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E1227 09:06:16.771235       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E1227 09:06:16.834436       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E1227 09:06:16.878428       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E1227 09:06:16.878560       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	I1227 09:06:19.600333       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 27 09:07:11 addons-102660 kubelet[1273]: I1227 09:07:11.218189    1273 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="gadget/gadget-mqf57" podStartSLOduration=17.789221788 podStartE2EDuration="47.21817011s" podCreationTimestamp="2025-12-27 09:06:24 +0000 UTC" firstStartedPulling="2025-12-27 09:06:40.909177866 +0000 UTC m=+23.012031643" lastFinishedPulling="2025-12-27 09:07:10.338126197 +0000 UTC m=+52.440979965" observedRunningTime="2025-12-27 09:07:11.217293378 +0000 UTC m=+53.320147173" watchObservedRunningTime="2025-12-27 09:07:11.21817011 +0000 UTC m=+53.321023895"
	Dec 27 09:07:12 addons-102660 kubelet[1273]: E1227 09:07:12.208099    1273 prober_manager.go:197] "Startup probe already exists for container" pod="gadget/gadget-mqf57" containerName="gadget"
	Dec 27 09:07:14 addons-102660 kubelet[1273]: E1227 09:07:14.216473    1273 prober_manager.go:209] "Readiness probe already exists for container" pod="ingress-nginx/ingress-nginx-controller-7847b5c79c-5sxqr" containerName="controller"
	Dec 27 09:07:14 addons-102660 kubelet[1273]: I1227 09:07:14.228860    1273 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="ingress-nginx/ingress-nginx-controller-7847b5c79c-5sxqr" podStartSLOduration=28.903182813 podStartE2EDuration="49.228838686s" podCreationTimestamp="2025-12-27 09:06:25 +0000 UTC" firstStartedPulling="2025-12-27 09:06:53.351665448 +0000 UTC m=+35.454519226" lastFinishedPulling="2025-12-27 09:07:13.677321323 +0000 UTC m=+55.780175099" observedRunningTime="2025-12-27 09:07:14.227398412 +0000 UTC m=+56.330252197" watchObservedRunningTime="2025-12-27 09:07:14.228838686 +0000 UTC m=+56.331692471"
	Dec 27 09:07:15 addons-102660 kubelet[1273]: E1227 09:07:15.221018    1273 prober_manager.go:209] "Readiness probe already exists for container" pod="ingress-nginx/ingress-nginx-controller-7847b5c79c-5sxqr" containerName="controller"
	Dec 27 09:07:15 addons-102660 kubelet[1273]: E1227 09:07:15.888524    1273 prober_manager.go:197] "Startup probe already exists for container" pod="gadget/gadget-mqf57" containerName="gadget"
	Dec 27 09:07:16 addons-102660 kubelet[1273]: E1227 09:07:16.225257    1273 prober_manager.go:197] "Startup probe already exists for container" pod="gadget/gadget-mqf57" containerName="gadget"
	Dec 27 09:07:17 addons-102660 kubelet[1273]: E1227 09:07:17.228278    1273 prober_manager.go:197] "Startup probe already exists for container" pod="gadget/gadget-mqf57" containerName="gadget"
	Dec 27 09:07:18 addons-102660 kubelet[1273]: I1227 09:07:18.244956    1273 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="gcp-auth/gcp-auth-5bbcf684b5-tdck9" podStartSLOduration=40.277521441 podStartE2EDuration="47.244937239s" podCreationTimestamp="2025-12-27 09:06:31 +0000 UTC" firstStartedPulling="2025-12-27 09:07:10.324912787 +0000 UTC m=+52.427766565" lastFinishedPulling="2025-12-27 09:07:17.292328599 +0000 UTC m=+59.395182363" observedRunningTime="2025-12-27 09:07:18.244543714 +0000 UTC m=+60.347397500" watchObservedRunningTime="2025-12-27 09:07:18.244937239 +0000 UTC m=+60.347791025"
	Dec 27 09:07:19 addons-102660 kubelet[1273]: I1227 09:07:19.023912    1273 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: hostpath.csi.k8s.io endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
	Dec 27 09:07:19 addons-102660 kubelet[1273]: I1227 09:07:19.023960    1273 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: hostpath.csi.k8s.io at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
	Dec 27 09:07:21 addons-102660 kubelet[1273]: E1227 09:07:21.259547    1273 prober_manager.go:221] "Liveness probe already exists for container" pod="kube-system/csi-hostpathplugin-fql4d" containerName="hostpath"
	Dec 27 09:07:21 addons-102660 kubelet[1273]: I1227 09:07:21.272805    1273 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/csi-hostpathplugin-fql4d" podStartSLOduration=2.311655137 podStartE2EDuration="45.272770086s" podCreationTimestamp="2025-12-27 09:06:36 +0000 UTC" firstStartedPulling="2025-12-27 09:06:37.389240651 +0000 UTC m=+19.492094421" lastFinishedPulling="2025-12-27 09:07:20.350355593 +0000 UTC m=+62.453209370" observedRunningTime="2025-12-27 09:07:21.272076267 +0000 UTC m=+63.374930054" watchObservedRunningTime="2025-12-27 09:07:21.272770086 +0000 UTC m=+63.375623872"
	Dec 27 09:07:22 addons-102660 kubelet[1273]: E1227 09:07:22.263754    1273 prober_manager.go:221] "Liveness probe already exists for container" pod="kube-system/csi-hostpathplugin-fql4d" containerName="hostpath"
	Dec 27 09:07:23 addons-102660 kubelet[1273]: I1227 09:07:23.577577    1273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5mt7\" (UniqueName: \"kubernetes.io/projected/838dc247-ecef-415f-b13c-bef23dd579e8-kube-api-access-m5mt7\") pod \"busybox\" (UID: \"838dc247-ecef-415f-b13c-bef23dd579e8\") " pod="default/busybox"
	Dec 27 09:07:23 addons-102660 kubelet[1273]: I1227 09:07:23.577663    1273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/838dc247-ecef-415f-b13c-bef23dd579e8-gcp-creds\") pod \"busybox\" (UID: \"838dc247-ecef-415f-b13c-bef23dd579e8\") " pod="default/busybox"
	Dec 27 09:07:25 addons-102660 kubelet[1273]: E1227 09:07:25.223641    1273 prober_manager.go:209] "Readiness probe already exists for container" pod="ingress-nginx/ingress-nginx-controller-7847b5c79c-5sxqr" containerName="controller"
	Dec 27 09:07:26 addons-102660 kubelet[1273]: I1227 09:07:26.293316    1273 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.453727246 podStartE2EDuration="3.293295178s" podCreationTimestamp="2025-12-27 09:07:23 +0000 UTC" firstStartedPulling="2025-12-27 09:07:23.805213134 +0000 UTC m=+65.908066914" lastFinishedPulling="2025-12-27 09:07:25.644781065 +0000 UTC m=+67.747634846" observedRunningTime="2025-12-27 09:07:26.292025076 +0000 UTC m=+68.394878862" watchObservedRunningTime="2025-12-27 09:07:26.293295178 +0000 UTC m=+68.396148965"
	Dec 27 09:07:31 addons-102660 kubelet[1273]: I1227 09:07:31.978659    1273 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="52ddad63-3ae8-4d87-a758-78b56ad53437" path="/var/lib/kubelet/pods/52ddad63-3ae8-4d87-a758-78b56ad53437/volumes"
	Dec 27 09:07:31 addons-102660 kubelet[1273]: I1227 09:07:31.979068    1273 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="9f7ec710-5179-42ef-8ba9-bae0799d7a31" path="/var/lib/kubelet/pods/9f7ec710-5179-42ef-8ba9-bae0799d7a31/volumes"
	Dec 27 09:07:32 addons-102660 kubelet[1273]: E1227 09:07:32.623953    1273 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:35546->127.0.0.1:34541: write tcp 127.0.0.1:35546->127.0.0.1:34541: write: broken pipe
	Dec 27 09:07:33 addons-102660 kubelet[1273]: I1227 09:07:33.347333    1273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/b287accc-1a5a-4e7a-9e08-f2c6c5cd75aa-data\") pod \"helper-pod-create-pvc-2ac3503c-e941-4af0-9082-e5c6e6529b7a\" (UID: \"b287accc-1a5a-4e7a-9e08-f2c6c5cd75aa\") " pod="local-path-storage/helper-pod-create-pvc-2ac3503c-e941-4af0-9082-e5c6e6529b7a"
	Dec 27 09:07:33 addons-102660 kubelet[1273]: I1227 09:07:33.347397    1273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/b287accc-1a5a-4e7a-9e08-f2c6c5cd75aa-gcp-creds\") pod \"helper-pod-create-pvc-2ac3503c-e941-4af0-9082-e5c6e6529b7a\" (UID: \"b287accc-1a5a-4e7a-9e08-f2c6c5cd75aa\") " pod="local-path-storage/helper-pod-create-pvc-2ac3503c-e941-4af0-9082-e5c6e6529b7a"
	Dec 27 09:07:33 addons-102660 kubelet[1273]: I1227 09:07:33.347485    1273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pc896\" (UniqueName: \"kubernetes.io/projected/b287accc-1a5a-4e7a-9e08-f2c6c5cd75aa-kube-api-access-pc896\") pod \"helper-pod-create-pvc-2ac3503c-e941-4af0-9082-e5c6e6529b7a\" (UID: \"b287accc-1a5a-4e7a-9e08-f2c6c5cd75aa\") " pod="local-path-storage/helper-pod-create-pvc-2ac3503c-e941-4af0-9082-e5c6e6529b7a"
	Dec 27 09:07:33 addons-102660 kubelet[1273]: I1227 09:07:33.347649    1273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/b287accc-1a5a-4e7a-9e08-f2c6c5cd75aa-script\") pod \"helper-pod-create-pvc-2ac3503c-e941-4af0-9082-e5c6e6529b7a\" (UID: \"b287accc-1a5a-4e7a-9e08-f2c6c5cd75aa\") " pod="local-path-storage/helper-pod-create-pvc-2ac3503c-e941-4af0-9082-e5c6e6529b7a"
	
	
	==> storage-provisioner [4901f7247dd76708997164babfe1fed20945e96ba91303ace203b6bfb566f849] <==
	W1227 09:07:09.591849       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 09:07:11.594971       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 09:07:11.599760       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 09:07:13.603051       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 09:07:13.606554       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 09:07:15.609586       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 09:07:15.613072       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 09:07:17.617551       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 09:07:17.621873       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 09:07:19.624479       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 09:07:19.628590       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 09:07:21.631120       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 09:07:21.634726       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 09:07:23.637090       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 09:07:23.641070       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 09:07:25.644217       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 09:07:25.648270       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 09:07:27.651551       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 09:07:27.654982       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 09:07:29.658260       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 09:07:29.661416       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 09:07:31.663886       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 09:07:31.668534       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 09:07:33.671504       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 09:07:33.676042       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-102660 -n addons-102660
helpers_test.go:270: (dbg) Run:  kubectl --context addons-102660 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: test-local-path ingress-nginx-admission-create-llq44 ingress-nginx-admission-patch-frcft registry-creds-567fb78d95-42kx7 helper-pod-create-pvc-2ac3503c-e941-4af0-9082-e5c6e6529b7a
helpers_test.go:283: ======> post-mortem[TestAddons/parallel/Headlamp]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context addons-102660 describe pod test-local-path ingress-nginx-admission-create-llq44 ingress-nginx-admission-patch-frcft registry-creds-567fb78d95-42kx7 helper-pod-create-pvc-2ac3503c-e941-4af0-9082-e5c6e6529b7a
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context addons-102660 describe pod test-local-path ingress-nginx-admission-create-llq44 ingress-nginx-admission-patch-frcft registry-creds-567fb78d95-42kx7 helper-pod-create-pvc-2ac3503c-e941-4af0-9082-e5c6e6529b7a: exit status 1 (65.393378ms)

-- stdout --
	Name:               test-local-path
	Namespace:          default
	Priority:           0
	Service Account:    default
	Node:               <none>
	Labels:             run=test-local-path
	Annotations:        <none>
	Status:             Pending
	IP:                 
	IPs:                <none>
	NominatedNodeName:  addons-102660
	Containers:
	  busybox:
	    Image:      busybox:stable
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-fc5pk (ro)
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-fc5pk:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:            <none>

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-llq44" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-frcft" not found
	Error from server (NotFound): pods "registry-creds-567fb78d95-42kx7" not found
	Error from server (NotFound): pods "helper-pod-create-pvc-2ac3503c-e941-4af0-9082-e5c6e6529b7a" not found

** /stderr **
helpers_test.go:288: kubectl --context addons-102660 describe pod test-local-path ingress-nginx-admission-create-llq44 ingress-nginx-admission-patch-frcft registry-creds-567fb78d95-42kx7 helper-pod-create-pvc-2ac3503c-e941-4af0-9082-e5c6e6529b7a: exit status 1
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-102660 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-102660 addons disable headlamp --alsologtostderr -v=1: exit status 11 (244.367269ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1227 09:07:35.309730  387609 out.go:360] Setting OutFile to fd 1 ...
	I1227 09:07:35.309857  387609 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:07:35.309868  387609 out.go:374] Setting ErrFile to fd 2...
	I1227 09:07:35.309873  387609 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:07:35.310088  387609 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22343-373581/.minikube/bin
	I1227 09:07:35.310371  387609 mustload.go:66] Loading cluster: addons-102660
	I1227 09:07:35.310680  387609 config.go:182] Loaded profile config "addons-102660": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:07:35.310700  387609 addons.go:622] checking whether the cluster is paused
	I1227 09:07:35.310783  387609 config.go:182] Loaded profile config "addons-102660": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:07:35.310814  387609 host.go:66] Checking if "addons-102660" exists ...
	I1227 09:07:35.311227  387609 cli_runner.go:164] Run: docker container inspect addons-102660 --format={{.State.Status}}
	I1227 09:07:35.331774  387609 ssh_runner.go:195] Run: systemctl --version
	I1227 09:07:35.331851  387609 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-102660
	I1227 09:07:35.351704  387609 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/addons-102660/id_rsa Username:docker}
	I1227 09:07:35.442132  387609 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 09:07:35.442222  387609 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 09:07:35.472752  387609 cri.go:96] found id: "61cbe8e837adc9e12c09da01b60c56b3939ef24c3edb0fb1a1ff7a1e5217bef5"
	I1227 09:07:35.472770  387609 cri.go:96] found id: "5ce019c3bd795b1d8c52d88aa5efda3f2b2a4e70fe66d66a718ac89d8d94459b"
	I1227 09:07:35.472775  387609 cri.go:96] found id: "bc00ef2c406878efd24b7441ce9684489bac149b8ea06437701d881d8fd1134a"
	I1227 09:07:35.472780  387609 cri.go:96] found id: "b84f056d766ce4f088ef4dd81df4a8a450aacef998043dacfd09c38d5e65a5de"
	I1227 09:07:35.472785  387609 cri.go:96] found id: "c811e10f8822a8d0fa521887326d0b7573c9ca6a2c6a83fbd04910dca60d1bb8"
	I1227 09:07:35.472805  387609 cri.go:96] found id: "a2283184858355bab4f1099219f6ece626fa0e2dc72cf70459da07857800fcd9"
	I1227 09:07:35.472810  387609 cri.go:96] found id: "94a18ba09d75af096aec09c9d6e99a28e229b47060eecfb746d369bcc0776811"
	I1227 09:07:35.472814  387609 cri.go:96] found id: "97eda9aefcf0da0558d7a06ddfc649858709d920ee98ddc5d849001e68c7dd53"
	I1227 09:07:35.472819  387609 cri.go:96] found id: "29b050443a0d0564e3cab361aa8cf5f193f04243d42827bc7903078cc1c63669"
	I1227 09:07:35.472826  387609 cri.go:96] found id: "163a0aaa6a06fd63eeafe4e94dd47e7deda5be49bc2cd72c50c63430dc3daffd"
	I1227 09:07:35.472831  387609 cri.go:96] found id: "8c32e03d7b7ea992d32ff6ed1bd6d76a455e62109935f568357619dc5d27c42d"
	I1227 09:07:35.472835  387609 cri.go:96] found id: "95be8942fae68d3719a732f35c19a653707b2b4233b03af95250f4910bf63bf0"
	I1227 09:07:35.472839  387609 cri.go:96] found id: "0100055c8dfbc5a2d1f43d01dbc4eec9a952e71f2fb34164ec64f5f1f8549569"
	I1227 09:07:35.472842  387609 cri.go:96] found id: "94c6915787e83d2b32d5174224a751c4f59d9a2bd7bba84fdc025220be0b5bf7"
	I1227 09:07:35.472845  387609 cri.go:96] found id: "e5e28883bcce170a2c5a972c277beb2812dfa28ff64d65b063802a2d4dd441b0"
	I1227 09:07:35.472853  387609 cri.go:96] found id: "f8332e2291843684ceb08144892b17749193498e0a119605cc9e26ca9611afa0"
	I1227 09:07:35.472856  387609 cri.go:96] found id: "a7360b2982e804662d1c021be1dfe58a9a695fe80b6c5d49bfb0e40b1ecfe587"
	I1227 09:07:35.472861  387609 cri.go:96] found id: "4901f7247dd76708997164babfe1fed20945e96ba91303ace203b6bfb566f849"
	I1227 09:07:35.472864  387609 cri.go:96] found id: "63d37d0b224df2dcac1ae61236ee850f35706ab817a808c84f20ad3e4cb2a302"
	I1227 09:07:35.472867  387609 cri.go:96] found id: "8dc04e3833a231fa5179c87f14246a81d854de5babe95333b3b15e87bcfcf039"
	I1227 09:07:35.472873  387609 cri.go:96] found id: "9e1905ef463d325bf6957345908cff92bced86fe3aa688b53f1bd2132d21eba5"
	I1227 09:07:35.472882  387609 cri.go:96] found id: "8c753ac1232bc2fd8a5f54e08381a58915380ecedec1d0a778df7de68269a65b"
	I1227 09:07:35.472887  387609 cri.go:96] found id: "3ea9a1cdacfc9634d7733f44b141e168572272cb1d2febf81605572e89c7bb0b"
	I1227 09:07:35.472895  387609 cri.go:96] found id: "16c4fbdade2e6b0e19d541ba379ce4ddd0e77b13d32e05b825055943a7d19bc4"
	I1227 09:07:35.472900  387609 cri.go:96] found id: ""
	I1227 09:07:35.472943  387609 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 09:07:35.485949  387609 out.go:203] 
	W1227 09:07:35.486985  387609 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T09:07:35Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T09:07:35Z" level=error msg="open /run/runc: no such file or directory"
	
	W1227 09:07:35.487001  387609 out.go:285] * 
	* 
	W1227 09:07:35.488573  387609 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1227 09:07:35.489564  387609 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable headlamp addon: args "out/minikube-linux-amd64 -p addons-102660 addons disable headlamp --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Headlamp (2.49s)
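
Triage note: this failure, and every parallel-addon failure below, exits with MK_ADDON_DISABLE_PAUSED. The paused-cluster check first lists kube-system containers with crictl (which succeeds and returns 24 IDs), then shells out to `sudo runc list -f json`, which dies with `open /run/runc: no such file or directory`. On this crio node the runc state directory evidently does not exist; whether crio is driving a different OCI runtime or simply a different state root is an assumption, not something the logs prove. A minimal Go sketch of a more tolerant version of that step, treating a missing state root as "no containers" instead of a hard failure (paths and fallback behaviour are assumptions, not minikube's actual implementation):

	package main
	
	import (
		"errors"
		"fmt"
		"os"
		"os/exec"
	)
	
	// listRuncContainers mirrors the failing step from the logs:
	// `sudo runc list -f json`. If the state root is missing, as on
	// this node, report an empty container list instead of an error.
	func listRuncContainers(root string) ([]byte, error) {
		if _, err := os.Stat(root); errors.Is(err, os.ErrNotExist) {
			// No state directory means runc manages no containers here.
			return []byte("[]"), nil
		}
		return exec.Command("sudo", "runc", "--root", root, "list", "-f", "json").Output()
	}
	
	func main() {
		out, err := listRuncContainers("/run/runc")
		if err != nil {
			fmt.Fprintln(os.Stderr, "runc list failed:", err)
			os.Exit(1)
		}
		fmt.Println(string(out))
	}

Against this node the sketch would print [] rather than reproducing the exit-status-1 seen in each disable attempt.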

TestAddons/parallel/CloudSpanner (5.24s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:353: "cloud-spanner-emulator-5649ccbc87-kr2js" [be73a3bf-96ab-4d2d-b467-129d3af57780] Running
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.002991552s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-102660 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-102660 addons disable cloud-spanner --alsologtostderr -v=1: exit status 11 (235.344609ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1227 09:07:45.787323  388421 out.go:360] Setting OutFile to fd 1 ...
	I1227 09:07:45.787580  388421 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:07:45.787589  388421 out.go:374] Setting ErrFile to fd 2...
	I1227 09:07:45.787593  388421 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:07:45.787771  388421 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22343-373581/.minikube/bin
	I1227 09:07:45.788010  388421 mustload.go:66] Loading cluster: addons-102660
	I1227 09:07:45.788305  388421 config.go:182] Loaded profile config "addons-102660": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:07:45.788327  388421 addons.go:622] checking whether the cluster is paused
	I1227 09:07:45.788414  388421 config.go:182] Loaded profile config "addons-102660": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:07:45.788426  388421 host.go:66] Checking if "addons-102660" exists ...
	I1227 09:07:45.788769  388421 cli_runner.go:164] Run: docker container inspect addons-102660 --format={{.State.Status}}
	I1227 09:07:45.806201  388421 ssh_runner.go:195] Run: systemctl --version
	I1227 09:07:45.806254  388421 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-102660
	I1227 09:07:45.823008  388421 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/addons-102660/id_rsa Username:docker}
	I1227 09:07:45.911833  388421 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 09:07:45.911911  388421 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 09:07:45.944434  388421 cri.go:96] found id: "61cbe8e837adc9e12c09da01b60c56b3939ef24c3edb0fb1a1ff7a1e5217bef5"
	I1227 09:07:45.944453  388421 cri.go:96] found id: "5ce019c3bd795b1d8c52d88aa5efda3f2b2a4e70fe66d66a718ac89d8d94459b"
	I1227 09:07:45.944457  388421 cri.go:96] found id: "bc00ef2c406878efd24b7441ce9684489bac149b8ea06437701d881d8fd1134a"
	I1227 09:07:45.944461  388421 cri.go:96] found id: "b84f056d766ce4f088ef4dd81df4a8a450aacef998043dacfd09c38d5e65a5de"
	I1227 09:07:45.944464  388421 cri.go:96] found id: "c811e10f8822a8d0fa521887326d0b7573c9ca6a2c6a83fbd04910dca60d1bb8"
	I1227 09:07:45.944467  388421 cri.go:96] found id: "a2283184858355bab4f1099219f6ece626fa0e2dc72cf70459da07857800fcd9"
	I1227 09:07:45.944470  388421 cri.go:96] found id: "94a18ba09d75af096aec09c9d6e99a28e229b47060eecfb746d369bcc0776811"
	I1227 09:07:45.944472  388421 cri.go:96] found id: "97eda9aefcf0da0558d7a06ddfc649858709d920ee98ddc5d849001e68c7dd53"
	I1227 09:07:45.944475  388421 cri.go:96] found id: "29b050443a0d0564e3cab361aa8cf5f193f04243d42827bc7903078cc1c63669"
	I1227 09:07:45.944483  388421 cri.go:96] found id: "163a0aaa6a06fd63eeafe4e94dd47e7deda5be49bc2cd72c50c63430dc3daffd"
	I1227 09:07:45.944486  388421 cri.go:96] found id: "8c32e03d7b7ea992d32ff6ed1bd6d76a455e62109935f568357619dc5d27c42d"
	I1227 09:07:45.944488  388421 cri.go:96] found id: "95be8942fae68d3719a732f35c19a653707b2b4233b03af95250f4910bf63bf0"
	I1227 09:07:45.944491  388421 cri.go:96] found id: "0100055c8dfbc5a2d1f43d01dbc4eec9a952e71f2fb34164ec64f5f1f8549569"
	I1227 09:07:45.944494  388421 cri.go:96] found id: "94c6915787e83d2b32d5174224a751c4f59d9a2bd7bba84fdc025220be0b5bf7"
	I1227 09:07:45.944497  388421 cri.go:96] found id: "e5e28883bcce170a2c5a972c277beb2812dfa28ff64d65b063802a2d4dd441b0"
	I1227 09:07:45.944501  388421 cri.go:96] found id: "f8332e2291843684ceb08144892b17749193498e0a119605cc9e26ca9611afa0"
	I1227 09:07:45.944504  388421 cri.go:96] found id: "a7360b2982e804662d1c021be1dfe58a9a695fe80b6c5d49bfb0e40b1ecfe587"
	I1227 09:07:45.944509  388421 cri.go:96] found id: "4901f7247dd76708997164babfe1fed20945e96ba91303ace203b6bfb566f849"
	I1227 09:07:45.944511  388421 cri.go:96] found id: "63d37d0b224df2dcac1ae61236ee850f35706ab817a808c84f20ad3e4cb2a302"
	I1227 09:07:45.944514  388421 cri.go:96] found id: "8dc04e3833a231fa5179c87f14246a81d854de5babe95333b3b15e87bcfcf039"
	I1227 09:07:45.944517  388421 cri.go:96] found id: "9e1905ef463d325bf6957345908cff92bced86fe3aa688b53f1bd2132d21eba5"
	I1227 09:07:45.944521  388421 cri.go:96] found id: "8c753ac1232bc2fd8a5f54e08381a58915380ecedec1d0a778df7de68269a65b"
	I1227 09:07:45.944524  388421 cri.go:96] found id: "3ea9a1cdacfc9634d7733f44b141e168572272cb1d2febf81605572e89c7bb0b"
	I1227 09:07:45.944526  388421 cri.go:96] found id: "16c4fbdade2e6b0e19d541ba379ce4ddd0e77b13d32e05b825055943a7d19bc4"
	I1227 09:07:45.944529  388421 cri.go:96] found id: ""
	I1227 09:07:45.944567  388421 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 09:07:45.959327  388421 out.go:203] 
	W1227 09:07:45.960432  388421 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T09:07:45Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T09:07:45Z" level=error msg="open /run/runc: no such file or directory"
	
	W1227 09:07:45.960454  388421 out.go:285] * 
	* 
	W1227 09:07:45.962251  388421 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1227 09:07:45.963367  388421 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable cloud-spanner addon: args "out/minikube-linux-amd64 -p addons-102660 addons disable cloud-spanner --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CloudSpanner (5.24s)

TestAddons/parallel/LocalPath (10.08s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:951: (dbg) Run:  kubectl --context addons-102660 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:957: (dbg) Run:  kubectl --context addons-102660 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:961: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-102660 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-102660 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-102660 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-102660 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-102660 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-102660 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:353: "test-local-path" [45a78d4a-3de3-4efe-be31-3d680bd4a253] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "test-local-path" [45a78d4a-3de3-4efe-be31-3d680bd4a253] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "test-local-path" [45a78d4a-3de3-4efe-be31-3d680bd4a253] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.00318788s
addons_test.go:969: (dbg) Run:  kubectl --context addons-102660 get pvc test-pvc -o=json
addons_test.go:978: (dbg) Run:  out/minikube-linux-amd64 -p addons-102660 ssh "cat /opt/local-path-provisioner/pvc-2ac3503c-e941-4af0-9082-e5c6e6529b7a_default_test-pvc/file1"
addons_test.go:990: (dbg) Run:  kubectl --context addons-102660 delete pod test-local-path
addons_test.go:994: (dbg) Run:  kubectl --context addons-102660 delete pvc test-pvc
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-102660 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-102660 addons disable storage-provisioner-rancher --alsologtostderr -v=1: exit status 11 (241.648373ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1227 09:07:42.892187  388145 out.go:360] Setting OutFile to fd 1 ...
	I1227 09:07:42.892485  388145 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:07:42.892501  388145 out.go:374] Setting ErrFile to fd 2...
	I1227 09:07:42.892508  388145 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:07:42.893063  388145 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22343-373581/.minikube/bin
	I1227 09:07:42.893688  388145 mustload.go:66] Loading cluster: addons-102660
	I1227 09:07:42.894091  388145 config.go:182] Loaded profile config "addons-102660": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:07:42.894116  388145 addons.go:622] checking whether the cluster is paused
	I1227 09:07:42.894221  388145 config.go:182] Loaded profile config "addons-102660": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:07:42.894240  388145 host.go:66] Checking if "addons-102660" exists ...
	I1227 09:07:42.894620  388145 cli_runner.go:164] Run: docker container inspect addons-102660 --format={{.State.Status}}
	I1227 09:07:42.912203  388145 ssh_runner.go:195] Run: systemctl --version
	I1227 09:07:42.912254  388145 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-102660
	I1227 09:07:42.930732  388145 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/addons-102660/id_rsa Username:docker}
	I1227 09:07:43.018939  388145 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 09:07:43.019026  388145 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 09:07:43.049871  388145 cri.go:96] found id: "61cbe8e837adc9e12c09da01b60c56b3939ef24c3edb0fb1a1ff7a1e5217bef5"
	I1227 09:07:43.049894  388145 cri.go:96] found id: "5ce019c3bd795b1d8c52d88aa5efda3f2b2a4e70fe66d66a718ac89d8d94459b"
	I1227 09:07:43.049898  388145 cri.go:96] found id: "bc00ef2c406878efd24b7441ce9684489bac149b8ea06437701d881d8fd1134a"
	I1227 09:07:43.049902  388145 cri.go:96] found id: "b84f056d766ce4f088ef4dd81df4a8a450aacef998043dacfd09c38d5e65a5de"
	I1227 09:07:43.049905  388145 cri.go:96] found id: "c811e10f8822a8d0fa521887326d0b7573c9ca6a2c6a83fbd04910dca60d1bb8"
	I1227 09:07:43.049908  388145 cri.go:96] found id: "a2283184858355bab4f1099219f6ece626fa0e2dc72cf70459da07857800fcd9"
	I1227 09:07:43.049911  388145 cri.go:96] found id: "94a18ba09d75af096aec09c9d6e99a28e229b47060eecfb746d369bcc0776811"
	I1227 09:07:43.049914  388145 cri.go:96] found id: "97eda9aefcf0da0558d7a06ddfc649858709d920ee98ddc5d849001e68c7dd53"
	I1227 09:07:43.049917  388145 cri.go:96] found id: "29b050443a0d0564e3cab361aa8cf5f193f04243d42827bc7903078cc1c63669"
	I1227 09:07:43.049924  388145 cri.go:96] found id: "163a0aaa6a06fd63eeafe4e94dd47e7deda5be49bc2cd72c50c63430dc3daffd"
	I1227 09:07:43.049926  388145 cri.go:96] found id: "8c32e03d7b7ea992d32ff6ed1bd6d76a455e62109935f568357619dc5d27c42d"
	I1227 09:07:43.049929  388145 cri.go:96] found id: "95be8942fae68d3719a732f35c19a653707b2b4233b03af95250f4910bf63bf0"
	I1227 09:07:43.049932  388145 cri.go:96] found id: "0100055c8dfbc5a2d1f43d01dbc4eec9a952e71f2fb34164ec64f5f1f8549569"
	I1227 09:07:43.049942  388145 cri.go:96] found id: "94c6915787e83d2b32d5174224a751c4f59d9a2bd7bba84fdc025220be0b5bf7"
	I1227 09:07:43.049948  388145 cri.go:96] found id: "e5e28883bcce170a2c5a972c277beb2812dfa28ff64d65b063802a2d4dd441b0"
	I1227 09:07:43.049956  388145 cri.go:96] found id: "f8332e2291843684ceb08144892b17749193498e0a119605cc9e26ca9611afa0"
	I1227 09:07:43.049958  388145 cri.go:96] found id: "a7360b2982e804662d1c021be1dfe58a9a695fe80b6c5d49bfb0e40b1ecfe587"
	I1227 09:07:43.049962  388145 cri.go:96] found id: "4901f7247dd76708997164babfe1fed20945e96ba91303ace203b6bfb566f849"
	I1227 09:07:43.049967  388145 cri.go:96] found id: "63d37d0b224df2dcac1ae61236ee850f35706ab817a808c84f20ad3e4cb2a302"
	I1227 09:07:43.049971  388145 cri.go:96] found id: "8dc04e3833a231fa5179c87f14246a81d854de5babe95333b3b15e87bcfcf039"
	I1227 09:07:43.049978  388145 cri.go:96] found id: "9e1905ef463d325bf6957345908cff92bced86fe3aa688b53f1bd2132d21eba5"
	I1227 09:07:43.049983  388145 cri.go:96] found id: "8c753ac1232bc2fd8a5f54e08381a58915380ecedec1d0a778df7de68269a65b"
	I1227 09:07:43.049990  388145 cri.go:96] found id: "3ea9a1cdacfc9634d7733f44b141e168572272cb1d2febf81605572e89c7bb0b"
	I1227 09:07:43.049995  388145 cri.go:96] found id: "16c4fbdade2e6b0e19d541ba379ce4ddd0e77b13d32e05b825055943a7d19bc4"
	I1227 09:07:43.050002  388145 cri.go:96] found id: ""
	I1227 09:07:43.050051  388145 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 09:07:43.065184  388145 out.go:203] 
	W1227 09:07:43.069977  388145 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T09:07:43Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T09:07:43Z" level=error msg="open /run/runc: no such file or directory"
	
	W1227 09:07:43.070002  388145 out.go:285] * 
	* 
	W1227 09:07:43.072276  388145 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1227 09:07:43.073973  388145 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable storage-provisioner-rancher addon: args "out/minikube-linux-amd64 -p addons-102660 addons disable storage-provisioner-rancher --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/LocalPath (10.08s)
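
The same check fails here at 09:07:43, again between a successful crictl listing and the runc call. Below is a hypothetical probe for which runtime state roots actually exist on the node; only the absence of /run/runc is confirmed by the logs, and the alternative candidates are assumptions:

	package main
	
	import (
		"fmt"
		"os"
	)
	
	func main() {
		// Candidate OCI runtime state roots; /run/runc is runc's default,
		// the others are guesses for what a crio node might use instead.
		for _, root := range []string{"/run/runc", "/run/crun", "/run/crio"} {
			if st, err := os.Stat(root); err == nil && st.IsDir() {
				fmt.Println("present:", root)
			} else {
				fmt.Println("absent: ", root)
			}
		}
	}

If one of the alternatives exists, pointing the listing at it via runc's --root flag, or skipping runc in favour of the CRI listing that already succeeded, would be the obvious direction for a fix.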

TestAddons/parallel/NvidiaDevicePlugin (5.24s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:353: "nvidia-device-plugin-daemonset-4jxql" [80e6633f-3dd4-4516-a8c2-040e279afab9] Running
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.003424074s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-102660 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-102660 addons disable nvidia-device-plugin --alsologtostderr -v=1: exit status 11 (235.729549ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1227 09:07:38.064451  387758 out.go:360] Setting OutFile to fd 1 ...
	I1227 09:07:38.064699  387758 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:07:38.064710  387758 out.go:374] Setting ErrFile to fd 2...
	I1227 09:07:38.064713  387758 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:07:38.064967  387758 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22343-373581/.minikube/bin
	I1227 09:07:38.065266  387758 mustload.go:66] Loading cluster: addons-102660
	I1227 09:07:38.065645  387758 config.go:182] Loaded profile config "addons-102660": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:07:38.065670  387758 addons.go:622] checking whether the cluster is paused
	I1227 09:07:38.065772  387758 config.go:182] Loaded profile config "addons-102660": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:07:38.065802  387758 host.go:66] Checking if "addons-102660" exists ...
	I1227 09:07:38.066239  387758 cli_runner.go:164] Run: docker container inspect addons-102660 --format={{.State.Status}}
	I1227 09:07:38.084241  387758 ssh_runner.go:195] Run: systemctl --version
	I1227 09:07:38.084285  387758 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-102660
	I1227 09:07:38.101190  387758 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/addons-102660/id_rsa Username:docker}
	I1227 09:07:38.190770  387758 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 09:07:38.190887  387758 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 09:07:38.218999  387758 cri.go:96] found id: "61cbe8e837adc9e12c09da01b60c56b3939ef24c3edb0fb1a1ff7a1e5217bef5"
	I1227 09:07:38.219028  387758 cri.go:96] found id: "5ce019c3bd795b1d8c52d88aa5efda3f2b2a4e70fe66d66a718ac89d8d94459b"
	I1227 09:07:38.219032  387758 cri.go:96] found id: "bc00ef2c406878efd24b7441ce9684489bac149b8ea06437701d881d8fd1134a"
	I1227 09:07:38.219037  387758 cri.go:96] found id: "b84f056d766ce4f088ef4dd81df4a8a450aacef998043dacfd09c38d5e65a5de"
	I1227 09:07:38.219040  387758 cri.go:96] found id: "c811e10f8822a8d0fa521887326d0b7573c9ca6a2c6a83fbd04910dca60d1bb8"
	I1227 09:07:38.219044  387758 cri.go:96] found id: "a2283184858355bab4f1099219f6ece626fa0e2dc72cf70459da07857800fcd9"
	I1227 09:07:38.219047  387758 cri.go:96] found id: "94a18ba09d75af096aec09c9d6e99a28e229b47060eecfb746d369bcc0776811"
	I1227 09:07:38.219050  387758 cri.go:96] found id: "97eda9aefcf0da0558d7a06ddfc649858709d920ee98ddc5d849001e68c7dd53"
	I1227 09:07:38.219053  387758 cri.go:96] found id: "29b050443a0d0564e3cab361aa8cf5f193f04243d42827bc7903078cc1c63669"
	I1227 09:07:38.219062  387758 cri.go:96] found id: "163a0aaa6a06fd63eeafe4e94dd47e7deda5be49bc2cd72c50c63430dc3daffd"
	I1227 09:07:38.219065  387758 cri.go:96] found id: "8c32e03d7b7ea992d32ff6ed1bd6d76a455e62109935f568357619dc5d27c42d"
	I1227 09:07:38.219068  387758 cri.go:96] found id: "95be8942fae68d3719a732f35c19a653707b2b4233b03af95250f4910bf63bf0"
	I1227 09:07:38.219071  387758 cri.go:96] found id: "0100055c8dfbc5a2d1f43d01dbc4eec9a952e71f2fb34164ec64f5f1f8549569"
	I1227 09:07:38.219073  387758 cri.go:96] found id: "94c6915787e83d2b32d5174224a751c4f59d9a2bd7bba84fdc025220be0b5bf7"
	I1227 09:07:38.219077  387758 cri.go:96] found id: "e5e28883bcce170a2c5a972c277beb2812dfa28ff64d65b063802a2d4dd441b0"
	I1227 09:07:38.219089  387758 cri.go:96] found id: "f8332e2291843684ceb08144892b17749193498e0a119605cc9e26ca9611afa0"
	I1227 09:07:38.219094  387758 cri.go:96] found id: "a7360b2982e804662d1c021be1dfe58a9a695fe80b6c5d49bfb0e40b1ecfe587"
	I1227 09:07:38.219098  387758 cri.go:96] found id: "4901f7247dd76708997164babfe1fed20945e96ba91303ace203b6bfb566f849"
	I1227 09:07:38.219104  387758 cri.go:96] found id: "63d37d0b224df2dcac1ae61236ee850f35706ab817a808c84f20ad3e4cb2a302"
	I1227 09:07:38.219107  387758 cri.go:96] found id: "8dc04e3833a231fa5179c87f14246a81d854de5babe95333b3b15e87bcfcf039"
	I1227 09:07:38.219110  387758 cri.go:96] found id: "9e1905ef463d325bf6957345908cff92bced86fe3aa688b53f1bd2132d21eba5"
	I1227 09:07:38.219115  387758 cri.go:96] found id: "8c753ac1232bc2fd8a5f54e08381a58915380ecedec1d0a778df7de68269a65b"
	I1227 09:07:38.219118  387758 cri.go:96] found id: "3ea9a1cdacfc9634d7733f44b141e168572272cb1d2febf81605572e89c7bb0b"
	I1227 09:07:38.219121  387758 cri.go:96] found id: "16c4fbdade2e6b0e19d541ba379ce4ddd0e77b13d32e05b825055943a7d19bc4"
	I1227 09:07:38.219124  387758 cri.go:96] found id: ""
	I1227 09:07:38.219172  387758 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 09:07:38.232158  387758 out.go:203] 
	W1227 09:07:38.233125  387758 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T09:07:38Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T09:07:38Z" level=error msg="open /run/runc: no such file or directory"
	
	W1227 09:07:38.233139  387758 out.go:285] * 
	* 
	W1227 09:07:38.234766  387758 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1227 09:07:38.235731  387758 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable nvidia-device-plugin addon: args "out/minikube-linux-amd64 -p addons-102660 addons disable nvidia-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/NvidiaDevicePlugin (5.24s)
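
Same signature again at 09:07:38. Together with Headlamp (09:07:35), LocalPath (09:07:43), Yakd (09:07:44), and CloudSpanner (09:07:45), every disable attempt in this window fails in the identical `sudo runc list -f json` step, which points at one environment-level cause rather than anything addon-specific; see the sketches after the Headlamp and LocalPath failures.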

TestAddons/parallel/Yakd (6.24s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:353: "yakd-dashboard-7bcf5795cd-shhhd" [ec43ba99-61c7-4e4f-bb7a-2ba3ddcb3b8c] Running
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003568044s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-102660 addons disable yakd --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-102660 addons disable yakd --alsologtostderr -v=1: exit status 11 (234.278442ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1227 09:07:44.300659  388277 out.go:360] Setting OutFile to fd 1 ...
	I1227 09:07:44.300928  388277 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:07:44.300938  388277 out.go:374] Setting ErrFile to fd 2...
	I1227 09:07:44.300942  388277 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:07:44.301121  388277 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22343-373581/.minikube/bin
	I1227 09:07:44.301380  388277 mustload.go:66] Loading cluster: addons-102660
	I1227 09:07:44.301688  388277 config.go:182] Loaded profile config "addons-102660": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:07:44.301708  388277 addons.go:622] checking whether the cluster is paused
	I1227 09:07:44.301787  388277 config.go:182] Loaded profile config "addons-102660": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:07:44.301815  388277 host.go:66] Checking if "addons-102660" exists ...
	I1227 09:07:44.302159  388277 cli_runner.go:164] Run: docker container inspect addons-102660 --format={{.State.Status}}
	I1227 09:07:44.319599  388277 ssh_runner.go:195] Run: systemctl --version
	I1227 09:07:44.319664  388277 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-102660
	I1227 09:07:44.336677  388277 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/addons-102660/id_rsa Username:docker}
	I1227 09:07:44.428202  388277 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 09:07:44.428299  388277 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 09:07:44.458123  388277 cri.go:96] found id: "61cbe8e837adc9e12c09da01b60c56b3939ef24c3edb0fb1a1ff7a1e5217bef5"
	I1227 09:07:44.458166  388277 cri.go:96] found id: "5ce019c3bd795b1d8c52d88aa5efda3f2b2a4e70fe66d66a718ac89d8d94459b"
	I1227 09:07:44.458172  388277 cri.go:96] found id: "bc00ef2c406878efd24b7441ce9684489bac149b8ea06437701d881d8fd1134a"
	I1227 09:07:44.458177  388277 cri.go:96] found id: "b84f056d766ce4f088ef4dd81df4a8a450aacef998043dacfd09c38d5e65a5de"
	I1227 09:07:44.458181  388277 cri.go:96] found id: "c811e10f8822a8d0fa521887326d0b7573c9ca6a2c6a83fbd04910dca60d1bb8"
	I1227 09:07:44.458187  388277 cri.go:96] found id: "a2283184858355bab4f1099219f6ece626fa0e2dc72cf70459da07857800fcd9"
	I1227 09:07:44.458189  388277 cri.go:96] found id: "94a18ba09d75af096aec09c9d6e99a28e229b47060eecfb746d369bcc0776811"
	I1227 09:07:44.458192  388277 cri.go:96] found id: "97eda9aefcf0da0558d7a06ddfc649858709d920ee98ddc5d849001e68c7dd53"
	I1227 09:07:44.458196  388277 cri.go:96] found id: "29b050443a0d0564e3cab361aa8cf5f193f04243d42827bc7903078cc1c63669"
	I1227 09:07:44.458216  388277 cri.go:96] found id: "163a0aaa6a06fd63eeafe4e94dd47e7deda5be49bc2cd72c50c63430dc3daffd"
	I1227 09:07:44.458221  388277 cri.go:96] found id: "8c32e03d7b7ea992d32ff6ed1bd6d76a455e62109935f568357619dc5d27c42d"
	I1227 09:07:44.458225  388277 cri.go:96] found id: "95be8942fae68d3719a732f35c19a653707b2b4233b03af95250f4910bf63bf0"
	I1227 09:07:44.458230  388277 cri.go:96] found id: "0100055c8dfbc5a2d1f43d01dbc4eec9a952e71f2fb34164ec64f5f1f8549569"
	I1227 09:07:44.458234  388277 cri.go:96] found id: "94c6915787e83d2b32d5174224a751c4f59d9a2bd7bba84fdc025220be0b5bf7"
	I1227 09:07:44.458239  388277 cri.go:96] found id: "e5e28883bcce170a2c5a972c277beb2812dfa28ff64d65b063802a2d4dd441b0"
	I1227 09:07:44.458250  388277 cri.go:96] found id: "f8332e2291843684ceb08144892b17749193498e0a119605cc9e26ca9611afa0"
	I1227 09:07:44.458254  388277 cri.go:96] found id: "a7360b2982e804662d1c021be1dfe58a9a695fe80b6c5d49bfb0e40b1ecfe587"
	I1227 09:07:44.458261  388277 cri.go:96] found id: "4901f7247dd76708997164babfe1fed20945e96ba91303ace203b6bfb566f849"
	I1227 09:07:44.458265  388277 cri.go:96] found id: "63d37d0b224df2dcac1ae61236ee850f35706ab817a808c84f20ad3e4cb2a302"
	I1227 09:07:44.458269  388277 cri.go:96] found id: "8dc04e3833a231fa5179c87f14246a81d854de5babe95333b3b15e87bcfcf039"
	I1227 09:07:44.458273  388277 cri.go:96] found id: "9e1905ef463d325bf6957345908cff92bced86fe3aa688b53f1bd2132d21eba5"
	I1227 09:07:44.458276  388277 cri.go:96] found id: "8c753ac1232bc2fd8a5f54e08381a58915380ecedec1d0a778df7de68269a65b"
	I1227 09:07:44.458279  388277 cri.go:96] found id: "3ea9a1cdacfc9634d7733f44b141e168572272cb1d2febf81605572e89c7bb0b"
	I1227 09:07:44.458282  388277 cri.go:96] found id: "16c4fbdade2e6b0e19d541ba379ce4ddd0e77b13d32e05b825055943a7d19bc4"
	I1227 09:07:44.458286  388277 cri.go:96] found id: ""
	I1227 09:07:44.458336  388277 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 09:07:44.471464  388277 out.go:203] 
	W1227 09:07:44.472358  388277 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T09:07:44Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T09:07:44Z" level=error msg="open /run/runc: no such file or directory"
	
	W1227 09:07:44.472377  388277 out.go:285] * 
	* 
	W1227 09:07:44.474008  388277 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1227 09:07:44.474833  388277 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable yakd addon: args "out/minikube-linux-amd64 -p addons-102660 addons disable yakd --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Yakd (6.24s)
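
The Yakd pod itself was healthy within ~6s; only the disable step failed, again at the `runc list` probe. Since crio, not dockerd, manages these containers, runc's default state directory may simply never be created. To see how crio's runtime handlers are configured on the node, something like the following could be used (a hypothetical diagnostic; the key names in the TOML dump vary by crio version):

	out/minikube-linux-amd64 -p addons-102660 ssh -- sudo crio config | grep -iEn 'runtime|root'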

                                                
                                    
TestAddons/parallel/AmdGpuDevicePlugin (5.23s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1040: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: waiting 6m0s for pods matching "name=amd-gpu-device-plugin" in namespace "kube-system" ...
helpers_test.go:353: "amd-gpu-device-plugin-77gfj" [52ad5567-bfb5-4109-891b-f498cd21d1b5] Running
addons_test.go:1040: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: name=amd-gpu-device-plugin healthy within 5.003238343s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-102660 addons disable amd-gpu-device-plugin --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-102660 addons disable amd-gpu-device-plugin --alsologtostderr -v=1: exit status 11 (226.160667ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1227 09:07:40.551587  387882 out.go:360] Setting OutFile to fd 1 ...
	I1227 09:07:40.551829  387882 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:07:40.551838  387882 out.go:374] Setting ErrFile to fd 2...
	I1227 09:07:40.551842  387882 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:07:40.552027  387882 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22343-373581/.minikube/bin
	I1227 09:07:40.552270  387882 mustload.go:66] Loading cluster: addons-102660
	I1227 09:07:40.552570  387882 config.go:182] Loaded profile config "addons-102660": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:07:40.552590  387882 addons.go:622] checking whether the cluster is paused
	I1227 09:07:40.552671  387882 config.go:182] Loaded profile config "addons-102660": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:07:40.552682  387882 host.go:66] Checking if "addons-102660" exists ...
	I1227 09:07:40.553041  387882 cli_runner.go:164] Run: docker container inspect addons-102660 --format={{.State.Status}}
	I1227 09:07:40.570269  387882 ssh_runner.go:195] Run: systemctl --version
	I1227 09:07:40.570334  387882 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-102660
	I1227 09:07:40.586775  387882 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/addons-102660/id_rsa Username:docker}
	I1227 09:07:40.674980  387882 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 09:07:40.675055  387882 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 09:07:40.702850  387882 cri.go:96] found id: "61cbe8e837adc9e12c09da01b60c56b3939ef24c3edb0fb1a1ff7a1e5217bef5"
	I1227 09:07:40.702878  387882 cri.go:96] found id: "5ce019c3bd795b1d8c52d88aa5efda3f2b2a4e70fe66d66a718ac89d8d94459b"
	I1227 09:07:40.702883  387882 cri.go:96] found id: "bc00ef2c406878efd24b7441ce9684489bac149b8ea06437701d881d8fd1134a"
	I1227 09:07:40.702887  387882 cri.go:96] found id: "b84f056d766ce4f088ef4dd81df4a8a450aacef998043dacfd09c38d5e65a5de"
	I1227 09:07:40.702890  387882 cri.go:96] found id: "c811e10f8822a8d0fa521887326d0b7573c9ca6a2c6a83fbd04910dca60d1bb8"
	I1227 09:07:40.702894  387882 cri.go:96] found id: "a2283184858355bab4f1099219f6ece626fa0e2dc72cf70459da07857800fcd9"
	I1227 09:07:40.702896  387882 cri.go:96] found id: "94a18ba09d75af096aec09c9d6e99a28e229b47060eecfb746d369bcc0776811"
	I1227 09:07:40.702899  387882 cri.go:96] found id: "97eda9aefcf0da0558d7a06ddfc649858709d920ee98ddc5d849001e68c7dd53"
	I1227 09:07:40.702902  387882 cri.go:96] found id: "29b050443a0d0564e3cab361aa8cf5f193f04243d42827bc7903078cc1c63669"
	I1227 09:07:40.702911  387882 cri.go:96] found id: "163a0aaa6a06fd63eeafe4e94dd47e7deda5be49bc2cd72c50c63430dc3daffd"
	I1227 09:07:40.702914  387882 cri.go:96] found id: "8c32e03d7b7ea992d32ff6ed1bd6d76a455e62109935f568357619dc5d27c42d"
	I1227 09:07:40.702917  387882 cri.go:96] found id: "95be8942fae68d3719a732f35c19a653707b2b4233b03af95250f4910bf63bf0"
	I1227 09:07:40.702919  387882 cri.go:96] found id: "0100055c8dfbc5a2d1f43d01dbc4eec9a952e71f2fb34164ec64f5f1f8549569"
	I1227 09:07:40.702922  387882 cri.go:96] found id: "94c6915787e83d2b32d5174224a751c4f59d9a2bd7bba84fdc025220be0b5bf7"
	I1227 09:07:40.702925  387882 cri.go:96] found id: "e5e28883bcce170a2c5a972c277beb2812dfa28ff64d65b063802a2d4dd441b0"
	I1227 09:07:40.702936  387882 cri.go:96] found id: "f8332e2291843684ceb08144892b17749193498e0a119605cc9e26ca9611afa0"
	I1227 09:07:40.702940  387882 cri.go:96] found id: "a7360b2982e804662d1c021be1dfe58a9a695fe80b6c5d49bfb0e40b1ecfe587"
	I1227 09:07:40.702944  387882 cri.go:96] found id: "4901f7247dd76708997164babfe1fed20945e96ba91303ace203b6bfb566f849"
	I1227 09:07:40.702947  387882 cri.go:96] found id: "63d37d0b224df2dcac1ae61236ee850f35706ab817a808c84f20ad3e4cb2a302"
	I1227 09:07:40.702950  387882 cri.go:96] found id: "8dc04e3833a231fa5179c87f14246a81d854de5babe95333b3b15e87bcfcf039"
	I1227 09:07:40.702952  387882 cri.go:96] found id: "9e1905ef463d325bf6957345908cff92bced86fe3aa688b53f1bd2132d21eba5"
	I1227 09:07:40.702955  387882 cri.go:96] found id: "8c753ac1232bc2fd8a5f54e08381a58915380ecedec1d0a778df7de68269a65b"
	I1227 09:07:40.702958  387882 cri.go:96] found id: "3ea9a1cdacfc9634d7733f44b141e168572272cb1d2febf81605572e89c7bb0b"
	I1227 09:07:40.702961  387882 cri.go:96] found id: "16c4fbdade2e6b0e19d541ba379ce4ddd0e77b13d32e05b825055943a7d19bc4"
	I1227 09:07:40.702963  387882 cri.go:96] found id: ""
	I1227 09:07:40.703005  387882 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 09:07:40.716467  387882 out.go:203] 
	W1227 09:07:40.717366  387882 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T09:07:40Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T09:07:40Z" level=error msg="open /run/runc: no such file or directory"
	
	W1227 09:07:40.717385  387882 out.go:285] * 
	* 
	W1227 09:07:40.719031  387882 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_d91df5e23a6c7812cf3b3b0d72c142ff742a541e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_d91df5e23a6c7812cf3b3b0d72c142ff742a541e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1227 09:07:40.720118  387882 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable amd-gpu-device-plugin addon: args "out/minikube-linux-amd64 -p addons-102660 addons disable amd-gpu-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/AmdGpuDevicePlugin (5.23s)
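
As with the other parallel addon tests, the readiness wait succeeded and only the disable step failed. The wait is just a label-selector poll; the equivalent one-off query via minikube's bundled kubectl would be (assuming the profile is still running):

	# Same selector the test waits on: name=amd-gpu-device-plugin in kube-system
	out/minikube-linux-amd64 -p addons-102660 kubectl -- -n kube-system get pods -l name=amd-gpu-device-plugin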

                                                
                                    
TestJSONOutput/pause/Command (2.37s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-143482 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p json-output-143482 --output=json --user=testUser: exit status 80 (2.365743884s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"ca232235-1061-4dd2-b93a-a31b3955d3de","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Pausing node json-output-143482 ...","name":"Pausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"a9d8eab2-0f65-4996-a070-7d9d9151741f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list running: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-12-27T09:21:01Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_PAUSE","url":""}}
	{"specversion":"1.0","id":"3b229a84-ead8-408a-b2b4-d210800b1f6a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following file to the GitHub issue:                             │\n│    - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-amd64 pause -p json-output-143482 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/pause/Command (2.37s)
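
Because --output=json emits one CloudEvents object per line, the failing event can be pulled out of the stream mechanically, for example with jq (a diagnostic sketch, not part of the test):

	out/minikube-linux-amd64 pause -p json-output-143482 --output=json --user=testUser \
	  | jq -r 'select(.type=="io.k8s.sigs.minikube.error") | .data.name + ": " + .data.message'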

                                                
                                    
TestJSONOutput/unpause/Command (1.66s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-143482 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-amd64 unpause -p json-output-143482 --output=json --user=testUser: exit status 80 (1.662140906s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"02373e69-df5f-4186-9f78-b1615495b50e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Unpausing node json-output-143482 ...","name":"Unpausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"75b8de8f-630c-41c4-a58a-0922f02a21e8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list paused: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-12-27T09:21:03Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_UNPAUSE","url":""}}
	{"specversion":"1.0","id":"59037d1c-6149-4ede-8735-884e7daab6cd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following file to the GitHub issue:                             │\n│    - /tmp/minikube_unpause_85c908ac827001a7ced33feb0caf7da086d17584_0.log                 │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-amd64 unpause -p json-output-143482 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/unpause/Command (1.66s)
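
The unpause failure comes from the same `runc list` probe; the node container itself is not Docker-paused. That can be confirmed at the Docker level, which covers only the outer kicbase container, not the workload containers inside it:

	docker container inspect json-output-143482 --format '{{.State.Status}} {{.State.Paused}}'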

                                                
                                    
TestPause/serial/Pause (5.74s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-174795 --alsologtostderr -v=5
pause_test.go:110: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p pause-174795 --alsologtostderr -v=5: exit status 80 (2.098365883s)

                                                
                                                
-- stdout --
	* Pausing node pause-174795 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1227 09:31:29.482205  552470 out.go:360] Setting OutFile to fd 1 ...
	I1227 09:31:29.482508  552470 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:31:29.482522  552470 out.go:374] Setting ErrFile to fd 2...
	I1227 09:31:29.482529  552470 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:31:29.482862  552470 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22343-373581/.minikube/bin
	I1227 09:31:29.483195  552470 out.go:368] Setting JSON to false
	I1227 09:31:29.483219  552470 mustload.go:66] Loading cluster: pause-174795
	I1227 09:31:29.483767  552470 config.go:182] Loaded profile config "pause-174795": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:31:29.484354  552470 cli_runner.go:164] Run: docker container inspect pause-174795 --format={{.State.Status}}
	I1227 09:31:29.504876  552470 host.go:66] Checking if "pause-174795" exists ...
	I1227 09:31:29.505219  552470 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 09:31:29.566927  552470 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:false NGoroutines:65 SystemTime:2025-12-27 09:31:29.555576703 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1227 09:31:29.567720  552470 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22158/minikube-v1.37.0-1766719468-22158-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1766719468-22158/minikube-v1.37.0-1766719468-22158-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1766719468-22158-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) preload-source:auto profile:pause-174795 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) rosetta:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1227 09:31:29.570211  552470 out.go:179] * Pausing node pause-174795 ... 
	I1227 09:31:29.571153  552470 host.go:66] Checking if "pause-174795" exists ...
	I1227 09:31:29.571411  552470 ssh_runner.go:195] Run: systemctl --version
	I1227 09:31:29.571458  552470 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-174795
	I1227 09:31:29.590824  552470 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33348 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/pause-174795/id_rsa Username:docker}
	I1227 09:31:29.685574  552470 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 09:31:29.700610  552470 pause.go:52] kubelet running: true
	I1227 09:31:29.700674  552470 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1227 09:31:29.875022  552470 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1227 09:31:29.875191  552470 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1227 09:31:29.980390  552470 cri.go:96] found id: "b975d7336536e5efcea8ea9dc06ed50c574c3e5ed6df99bd11149b10208da309"
	I1227 09:31:29.980483  552470 cri.go:96] found id: "8a782982e9ef2e0e8def8c7ce58acd1b563b89ecea1122ed0cf5b86a00416acf"
	I1227 09:31:29.980509  552470 cri.go:96] found id: "b0b4fee0b1c730e30a9ae0a208fd8f210bae3628512db98fba50b8441d9bd6c2"
	I1227 09:31:29.980515  552470 cri.go:96] found id: "965a27d1118c331b5db7062d66e00d6ed567564250c8903fa341758b374f9146"
	I1227 09:31:29.980519  552470 cri.go:96] found id: "528c2fa63ed83cec23ae93d5b3d91cb11f694c3f854211e3c5a4b5019745964e"
	I1227 09:31:29.980524  552470 cri.go:96] found id: "01f37880e510cd54c4ee5969744484076b9cec30f33cf150fb6e73162a9f073f"
	I1227 09:31:29.980530  552470 cri.go:96] found id: "c13a4a0721d48b9d41dfe2ff8c5daaa20149037a95ecde103a52610e6c1d9403"
	I1227 09:31:29.980534  552470 cri.go:96] found id: ""
	I1227 09:31:29.980631  552470 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 09:31:30.000054  552470 retry.go:84] will retry after 100ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T09:31:29Z" level=error msg="open /run/runc: no such file or directory"
	I1227 09:31:30.150474  552470 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 09:31:30.163376  552470 pause.go:52] kubelet running: false
	I1227 09:31:30.163467  552470 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1227 09:31:30.296366  552470 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1227 09:31:30.296454  552470 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1227 09:31:30.371958  552470 cri.go:96] found id: "b975d7336536e5efcea8ea9dc06ed50c574c3e5ed6df99bd11149b10208da309"
	I1227 09:31:30.371978  552470 cri.go:96] found id: "8a782982e9ef2e0e8def8c7ce58acd1b563b89ecea1122ed0cf5b86a00416acf"
	I1227 09:31:30.371984  552470 cri.go:96] found id: "b0b4fee0b1c730e30a9ae0a208fd8f210bae3628512db98fba50b8441d9bd6c2"
	I1227 09:31:30.371989  552470 cri.go:96] found id: "965a27d1118c331b5db7062d66e00d6ed567564250c8903fa341758b374f9146"
	I1227 09:31:30.371997  552470 cri.go:96] found id: "528c2fa63ed83cec23ae93d5b3d91cb11f694c3f854211e3c5a4b5019745964e"
	I1227 09:31:30.372004  552470 cri.go:96] found id: "01f37880e510cd54c4ee5969744484076b9cec30f33cf150fb6e73162a9f073f"
	I1227 09:31:30.372011  552470 cri.go:96] found id: "c13a4a0721d48b9d41dfe2ff8c5daaa20149037a95ecde103a52610e6c1d9403"
	I1227 09:31:30.372016  552470 cri.go:96] found id: ""
	I1227 09:31:30.372058  552470 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 09:31:30.615344  552470 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 09:31:30.636495  552470 pause.go:52] kubelet running: false
	I1227 09:31:30.636570  552470 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1227 09:31:30.789740  552470 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1227 09:31:30.789934  552470 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1227 09:31:30.876429  552470 cri.go:96] found id: "b975d7336536e5efcea8ea9dc06ed50c574c3e5ed6df99bd11149b10208da309"
	I1227 09:31:30.876452  552470 cri.go:96] found id: "8a782982e9ef2e0e8def8c7ce58acd1b563b89ecea1122ed0cf5b86a00416acf"
	I1227 09:31:30.876458  552470 cri.go:96] found id: "b0b4fee0b1c730e30a9ae0a208fd8f210bae3628512db98fba50b8441d9bd6c2"
	I1227 09:31:30.876464  552470 cri.go:96] found id: "965a27d1118c331b5db7062d66e00d6ed567564250c8903fa341758b374f9146"
	I1227 09:31:30.876468  552470 cri.go:96] found id: "528c2fa63ed83cec23ae93d5b3d91cb11f694c3f854211e3c5a4b5019745964e"
	I1227 09:31:30.876473  552470 cri.go:96] found id: "01f37880e510cd54c4ee5969744484076b9cec30f33cf150fb6e73162a9f073f"
	I1227 09:31:30.876478  552470 cri.go:96] found id: "c13a4a0721d48b9d41dfe2ff8c5daaa20149037a95ecde103a52610e6c1d9403"
	I1227 09:31:30.876482  552470 cri.go:96] found id: ""
	I1227 09:31:30.876526  552470 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 09:31:31.261987  552470 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 09:31:31.276779  552470 pause.go:52] kubelet running: false
	I1227 09:31:31.276862  552470 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1227 09:31:31.415360  552470 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1227 09:31:31.415436  552470 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1227 09:31:31.490234  552470 cri.go:96] found id: "b975d7336536e5efcea8ea9dc06ed50c574c3e5ed6df99bd11149b10208da309"
	I1227 09:31:31.490254  552470 cri.go:96] found id: "8a782982e9ef2e0e8def8c7ce58acd1b563b89ecea1122ed0cf5b86a00416acf"
	I1227 09:31:31.490258  552470 cri.go:96] found id: "b0b4fee0b1c730e30a9ae0a208fd8f210bae3628512db98fba50b8441d9bd6c2"
	I1227 09:31:31.490262  552470 cri.go:96] found id: "965a27d1118c331b5db7062d66e00d6ed567564250c8903fa341758b374f9146"
	I1227 09:31:31.490265  552470 cri.go:96] found id: "528c2fa63ed83cec23ae93d5b3d91cb11f694c3f854211e3c5a4b5019745964e"
	I1227 09:31:31.490267  552470 cri.go:96] found id: "01f37880e510cd54c4ee5969744484076b9cec30f33cf150fb6e73162a9f073f"
	I1227 09:31:31.490270  552470 cri.go:96] found id: "c13a4a0721d48b9d41dfe2ff8c5daaa20149037a95ecde103a52610e6c1d9403"
	I1227 09:31:31.490273  552470 cri.go:96] found id: ""
	I1227 09:31:31.490307  552470 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 09:31:31.504525  552470 out.go:203] 
	W1227 09:31:31.505717  552470 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T09:31:31Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T09:31:31Z" level=error msg="open /run/runc: no such file or directory"
	
	W1227 09:31:31.505744  552470 out.go:285] * 
	* 
	W1227 09:31:31.508963  552470 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1227 09:31:31.510779  552470 out.go:203] 

                                                
                                                
** /stderr **
pause_test.go:112: failed to pause minikube with args: "out/minikube-linux-amd64 pause -p pause-174795 --alsologtostderr -v=5" : exit status 80
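
One side effect visible in the log above: the first pause iteration found the kubelet running and ran `sudo systemctl disable --now kubelet` before the runc probe failed, so the node is left with the kubelet stopped even though the pause itself errored out. A quick check, assuming the profile is still up:

	out/minikube-linux-amd64 -p pause-174795 ssh -- sudo systemctl is-active kubelet
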
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect pause-174795
helpers_test.go:244: (dbg) docker inspect pause-174795:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "fb583517a621302e25a369bf56648caa74bac98b6bb5ea76ecd637e17456385e",
	        "Created": "2025-12-27T09:30:43.897847375Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 536238,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-27T09:30:43.951408389Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c444b5e11df9d6bc496256b0e11f4e11deb33a885211ded2c3a4667df55bcb6b",
	        "ResolvConfPath": "/var/lib/docker/containers/fb583517a621302e25a369bf56648caa74bac98b6bb5ea76ecd637e17456385e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/fb583517a621302e25a369bf56648caa74bac98b6bb5ea76ecd637e17456385e/hostname",
	        "HostsPath": "/var/lib/docker/containers/fb583517a621302e25a369bf56648caa74bac98b6bb5ea76ecd637e17456385e/hosts",
	        "LogPath": "/var/lib/docker/containers/fb583517a621302e25a369bf56648caa74bac98b6bb5ea76ecd637e17456385e/fb583517a621302e25a369bf56648caa74bac98b6bb5ea76ecd637e17456385e-json.log",
	        "Name": "/pause-174795",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-174795:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "pause-174795",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "fb583517a621302e25a369bf56648caa74bac98b6bb5ea76ecd637e17456385e",
	                "LowerDir": "/var/lib/docker/overlay2/bb743371dc7f85d19dd913da73ed97f5db45f7be44168311cb726c510acf15a9-init/diff:/var/lib/docker/overlay2/8c7e7d4b0a6e752d3b033998593f18412fe880cdae3fca36ce4655d5d5dd6a34/diff",
	                "MergedDir": "/var/lib/docker/overlay2/bb743371dc7f85d19dd913da73ed97f5db45f7be44168311cb726c510acf15a9/merged",
	                "UpperDir": "/var/lib/docker/overlay2/bb743371dc7f85d19dd913da73ed97f5db45f7be44168311cb726c510acf15a9/diff",
	                "WorkDir": "/var/lib/docker/overlay2/bb743371dc7f85d19dd913da73ed97f5db45f7be44168311cb726c510acf15a9/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "pause-174795",
	                "Source": "/var/lib/docker/volumes/pause-174795/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-174795",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-174795",
	                "name.minikube.sigs.k8s.io": "pause-174795",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "64b2cf89e9c902bdff177078c723955e16a39afb97df89c31fc040050b88b826",
	            "SandboxKey": "/var/run/docker/netns/64b2cf89e9c9",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33348"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33349"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33352"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33350"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33351"
	                    }
	                ]
	            },
	            "Networks": {
	                "pause-174795": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "829fcc9c5d238427e4b76ddfe3d2ba33dd3ee39ce0a6fdd1993cc878d7016258",
	                    "EndpointID": "6aafbb01d09f971de4aba863b6bc41b4320ec0eda8a885733c1a52452bf5a8ea",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "52:93:2d:ce:7d:e3",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-174795",
	                        "fb583517a621"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
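
Note the HostConfig.Tmpfs entries above: /run is a fresh tmpfs inside the node container, so /run/runc exists only if some component creates it after boot. On this crio node nothing did, which is consistent with the "open /run/runc: no such file or directory" error seen throughout this run. A direct check (diagnostic only):

	out/minikube-linux-amd64 -p pause-174795 ssh -- ls -ld /run/runc
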
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-174795 -n pause-174795
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-174795 -n pause-174795: exit status 2 (356.353174ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p pause-174795 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p pause-174795 logs -n 25: (1.015841496s)
helpers_test.go:261: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────┬──────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                     ARGS                                      │   PROFILE    │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────┼──────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p false-157923 sudo iptables -t nat -L -n -v                                 │ false-157923 │ jenkins │ v1.37.0 │ 27 Dec 25 09:31 UTC │                     │
	│ ssh     │ -p false-157923 sudo systemctl status kubelet --all --full --no-pager         │ false-157923 │ jenkins │ v1.37.0 │ 27 Dec 25 09:31 UTC │                     │
	│ ssh     │ -p false-157923 sudo systemctl cat kubelet --no-pager                         │ false-157923 │ jenkins │ v1.37.0 │ 27 Dec 25 09:31 UTC │                     │
	│ ssh     │ -p false-157923 sudo journalctl -xeu kubelet --all --full --no-pager          │ false-157923 │ jenkins │ v1.37.0 │ 27 Dec 25 09:31 UTC │                     │
	│ ssh     │ -p false-157923 sudo cat /etc/kubernetes/kubelet.conf                         │ false-157923 │ jenkins │ v1.37.0 │ 27 Dec 25 09:31 UTC │                     │
	│ ssh     │ -p false-157923 sudo cat /var/lib/kubelet/config.yaml                         │ false-157923 │ jenkins │ v1.37.0 │ 27 Dec 25 09:31 UTC │                     │
	│ ssh     │ -p false-157923 sudo systemctl status docker --all --full --no-pager          │ false-157923 │ jenkins │ v1.37.0 │ 27 Dec 25 09:31 UTC │                     │
	│ ssh     │ -p false-157923 sudo systemctl cat docker --no-pager                          │ false-157923 │ jenkins │ v1.37.0 │ 27 Dec 25 09:31 UTC │                     │
	│ ssh     │ -p false-157923 sudo cat /etc/docker/daemon.json                              │ false-157923 │ jenkins │ v1.37.0 │ 27 Dec 25 09:31 UTC │                     │
	│ ssh     │ -p false-157923 sudo docker system info                                       │ false-157923 │ jenkins │ v1.37.0 │ 27 Dec 25 09:31 UTC │                     │
	│ ssh     │ -p false-157923 sudo systemctl status cri-docker --all --full --no-pager      │ false-157923 │ jenkins │ v1.37.0 │ 27 Dec 25 09:31 UTC │                     │
	│ ssh     │ -p false-157923 sudo systemctl cat cri-docker --no-pager                      │ false-157923 │ jenkins │ v1.37.0 │ 27 Dec 25 09:31 UTC │                     │
	│ ssh     │ -p false-157923 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf │ false-157923 │ jenkins │ v1.37.0 │ 27 Dec 25 09:31 UTC │                     │
	│ ssh     │ -p false-157923 sudo cat /usr/lib/systemd/system/cri-docker.service           │ false-157923 │ jenkins │ v1.37.0 │ 27 Dec 25 09:31 UTC │                     │
	│ ssh     │ -p false-157923 sudo cri-dockerd --version                                    │ false-157923 │ jenkins │ v1.37.0 │ 27 Dec 25 09:31 UTC │                     │
	│ ssh     │ -p false-157923 sudo systemctl status containerd --all --full --no-pager      │ false-157923 │ jenkins │ v1.37.0 │ 27 Dec 25 09:31 UTC │                     │
	│ ssh     │ -p false-157923 sudo systemctl cat containerd --no-pager                      │ false-157923 │ jenkins │ v1.37.0 │ 27 Dec 25 09:31 UTC │                     │
	│ ssh     │ -p false-157923 sudo cat /lib/systemd/system/containerd.service               │ false-157923 │ jenkins │ v1.37.0 │ 27 Dec 25 09:31 UTC │                     │
	│ ssh     │ -p false-157923 sudo cat /etc/containerd/config.toml                          │ false-157923 │ jenkins │ v1.37.0 │ 27 Dec 25 09:31 UTC │                     │
	│ ssh     │ -p false-157923 sudo containerd config dump                                   │ false-157923 │ jenkins │ v1.37.0 │ 27 Dec 25 09:31 UTC │                     │
	│ ssh     │ -p false-157923 sudo systemctl status crio --all --full --no-pager            │ false-157923 │ jenkins │ v1.37.0 │ 27 Dec 25 09:31 UTC │                     │
	│ ssh     │ -p false-157923 sudo systemctl cat crio --no-pager                            │ false-157923 │ jenkins │ v1.37.0 │ 27 Dec 25 09:31 UTC │                     │
	│ ssh     │ -p false-157923 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;  │ false-157923 │ jenkins │ v1.37.0 │ 27 Dec 25 09:31 UTC │                     │
	│ ssh     │ -p false-157923 sudo crio config                                              │ false-157923 │ jenkins │ v1.37.0 │ 27 Dec 25 09:31 UTC │                     │
	│ delete  │ -p false-157923                                                               │ false-157923 │ jenkins │ v1.37.0 │ 27 Dec 25 09:31 UTC │ 27 Dec 25 09:31 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────┴──────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
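
	For reference, a minimal Go sketch of re-running a few of the runtime diagnostics collected in the table above (the minikube binary path, profile name, and commands are copied from the table; the loop itself is illustrative, not part of the test suite):

	// diagdump.go: replay selected diagnostics from the command table.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		profile := "false-157923" // profile name taken from the table above
		cmds := []string{
			"sudo systemctl status crio --all --full --no-pager",
			"sudo systemctl cat crio --no-pager",
			"sudo crio config",
		}
		for _, c := range cmds {
			// minikube ssh runs the given command inside the node container.
			out, err := exec.Command("out/minikube-linux-amd64", "-p", profile, "ssh", c).CombinedOutput()
			fmt.Printf("=== %s ===\n%s", c, out)
			if err != nil {
				fmt.Println("exit:", err)
			}
		}
	}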
	
	
	==> Last Start <==
	Log file created at: 2025/12/27 09:31:29
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1227 09:31:29.745259  552684 out.go:360] Setting OutFile to fd 1 ...
	I1227 09:31:29.746010  552684 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:31:29.746015  552684 out.go:374] Setting ErrFile to fd 2...
	I1227 09:31:29.746021  552684 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:31:29.746343  552684 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22343-373581/.minikube/bin
	I1227 09:31:29.746957  552684 out.go:368] Setting JSON to false
	I1227 09:31:29.748370  552684 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4434,"bootTime":1766823456,"procs":282,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1227 09:31:29.748445  552684 start.go:143] virtualization: kvm guest
	I1227 09:31:29.751102  552684 out.go:179] * [NoKubernetes-397662] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1227 09:31:29.752237  552684 out.go:179]   - MINIKUBE_LOCATION=22343
	I1227 09:31:29.752243  552684 notify.go:221] Checking for updates...
	I1227 09:31:29.754333  552684 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 09:31:29.755501  552684 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22343-373581/kubeconfig
	I1227 09:31:29.756675  552684 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22343-373581/.minikube
	I1227 09:31:29.757950  552684 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1227 09:31:29.759175  552684 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1227 09:31:29.761147  552684 config.go:182] Loaded profile config "NoKubernetes-397662": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1227 09:31:29.761957  552684 start.go:1810] No Kubernetes version set for minikube, setting Kubernetes version to v0.0.0
	I1227 09:31:29.762003  552684 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 09:31:29.799178  552684 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1227 09:31:29.799274  552684 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 09:31:29.866838  552684 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:false NGoroutines:65 SystemTime:2025-12-27 09:31:29.857223541 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1227 09:31:29.866936  552684 docker.go:319] overlay module found
	I1227 09:31:29.868875  552684 out.go:179] * Using the docker driver based on existing profile
	I1227 09:31:29.870103  552684 start.go:309] selected driver: docker
	I1227 09:31:29.870114  552684 start.go:928] validating driver "docker" against &{Name:NoKubernetes-397662 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v0.0.0 ClusterName:NoKubernetes-397662 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 09:31:29.870213  552684 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 09:31:29.870314  552684 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 09:31:29.952734  552684 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:false NGoroutines:65 SystemTime:2025-12-27 09:31:29.93927667 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1227 09:31:29.953659  552684 cni.go:84] Creating CNI manager for ""
	I1227 09:31:29.953745  552684 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 09:31:29.953824  552684 start.go:353] cluster config:
	{Name:NoKubernetes-397662 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v0.0.0 ClusterName:NoKubernetes-397662 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 09:31:29.955484  552684 out.go:179] * Starting minikube without Kubernetes in cluster NoKubernetes-397662
	I1227 09:31:29.956597  552684 cache.go:134] Beginning downloading kic base image for docker with crio
	I1227 09:31:29.958199  552684 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
	I1227 09:31:29.959526  552684 preload.go:188] Checking if preload exists for k8s version v0.0.0 and runtime crio
	I1227 09:31:29.959714  552684 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1227 09:31:29.993743  552684 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon, skipping pull
	I1227 09:31:29.993758  552684 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in daemon, skipping load
	W1227 09:31:30.261334  552684 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v0.0.0/preloaded-images-k8s-v18-v0.0.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	W1227 09:31:30.473110  552684 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v0.0.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I1227 09:31:30.473268  552684 profile.go:143] Saving config to /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/NoKubernetes-397662/config.json ...
	I1227 09:31:30.474292  552684 cache.go:243] Successfully downloaded all kic artifacts
	I1227 09:31:30.474347  552684 start.go:360] acquireMachinesLock for NoKubernetes-397662: {Name:mk4627a2ed4c9cebfe17f0db60c5fb16b072ab8f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 09:31:30.474413  552684 start.go:364] duration metric: took 47.802µs to acquireMachinesLock for "NoKubernetes-397662"
	I1227 09:31:30.474426  552684 start.go:96] Skipping create...Using existing machine configuration
	I1227 09:31:30.474433  552684 fix.go:54] fixHost starting: 
	I1227 09:31:30.474764  552684 cli_runner.go:164] Run: docker container inspect NoKubernetes-397662 --format={{.State.Status}}
	I1227 09:31:30.498601  552684 fix.go:112] recreateIfNeeded on NoKubernetes-397662: state=Stopped err=<nil>
	W1227 09:31:30.498626  552684 fix.go:138] unexpected machine state, will restart: <nil>
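
	The two preload 404 warnings above are expected for KubernetesVersion v0.0.0 (no preload tarball is published for it). A minimal Go sketch that probes the same URLs, copied verbatim from the log (the program itself is illustrative):

	// preloadcheck.go: HEAD the preload tarball URLs reported as 404 above.
	package main

	import (
		"fmt"
		"net/http"
	)

	func main() {
		urls := []string{
			"https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v0.0.0/preloaded-images-k8s-v18-v0.0.0-cri-o-overlay-amd64.tar.lz4",
			"https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v0.0.0-cri-o-overlay-amd64.tar.lz4",
		}
		for _, u := range urls {
			resp, err := http.Head(u)
			if err != nil {
				fmt.Println(u, "error:", err)
				continue
			}
			resp.Body.Close()
			fmt.Println(u, "status:", resp.StatusCode) // 404 expected for v0.0.0
		}
	}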
	
	
	==> CRI-O <==
	Dec 27 09:31:26 pause-174795 crio[2209]: time="2025-12-27T09:31:26.094819165Z" level=info msg="RDT not available in the host system"
	Dec 27 09:31:26 pause-174795 crio[2209]: time="2025-12-27T09:31:26.09483494Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 27 09:31:26 pause-174795 crio[2209]: time="2025-12-27T09:31:26.095599566Z" level=info msg="Conmon does support the --sync option"
	Dec 27 09:31:26 pause-174795 crio[2209]: time="2025-12-27T09:31:26.095616165Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 27 09:31:26 pause-174795 crio[2209]: time="2025-12-27T09:31:26.095635405Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 27 09:31:26 pause-174795 crio[2209]: time="2025-12-27T09:31:26.096316767Z" level=info msg="Conmon does support the --sync option"
	Dec 27 09:31:26 pause-174795 crio[2209]: time="2025-12-27T09:31:26.09633643Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 27 09:31:26 pause-174795 crio[2209]: time="2025-12-27T09:31:26.100522056Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 27 09:31:26 pause-174795 crio[2209]: time="2025-12-27T09:31:26.100545996Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 27 09:31:26 pause-174795 crio[2209]: time="2025-12-27T09:31:26.101140685Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oci/hooks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"systemd\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n    uid_mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    namespaced_auth_dir = \"/etc/crio/auth\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_dir = \"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [crio.nri]\n    enable_nri = true\n    nri_listen = \"/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Dec 27 09:31:26 pause-174795 crio[2209]: time="2025-12-27T09:31:26.101532586Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Dec 27 09:31:26 pause-174795 crio[2209]: time="2025-12-27T09:31:26.101591552Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Dec 27 09:31:26 pause-174795 crio[2209]: time="2025-12-27T09:31:26.19216703Z" level=info msg="Got pod network &{Name:coredns-7d764666f9-89wgq Namespace:kube-system ID:9c3f7db7c0f3e365fffaa6ee445f4c1598f4150727b115ef8aba1d257559a515 UID:d9d69cbd-cc63-4c35-ac3d-ce5778fb32d0 NetNS:/var/run/netns/582b5199-697b-4ad6-ab54-14e022611c44 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00080f190}] Aliases:map[]}"
	Dec 27 09:31:26 pause-174795 crio[2209]: time="2025-12-27T09:31:26.192441484Z" level=info msg="Checking pod kube-system_coredns-7d764666f9-89wgq for CNI network kindnet (type=ptp)"
	Dec 27 09:31:26 pause-174795 crio[2209]: time="2025-12-27T09:31:26.192972031Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 27 09:31:26 pause-174795 crio[2209]: time="2025-12-27T09:31:26.192996556Z" level=info msg="Starting seccomp notifier watcher"
	Dec 27 09:31:26 pause-174795 crio[2209]: time="2025-12-27T09:31:26.193046778Z" level=info msg="Create NRI interface"
	Dec 27 09:31:26 pause-174795 crio[2209]: time="2025-12-27T09:31:26.195130707Z" level=info msg="built-in NRI default validator is disabled"
	Dec 27 09:31:26 pause-174795 crio[2209]: time="2025-12-27T09:31:26.195152228Z" level=info msg="runtime interface created"
	Dec 27 09:31:26 pause-174795 crio[2209]: time="2025-12-27T09:31:26.195166408Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 27 09:31:26 pause-174795 crio[2209]: time="2025-12-27T09:31:26.195173203Z" level=info msg="runtime interface starting up..."
	Dec 27 09:31:26 pause-174795 crio[2209]: time="2025-12-27T09:31:26.195181227Z" level=info msg="starting plugins..."
	Dec 27 09:31:26 pause-174795 crio[2209]: time="2025-12-27T09:31:26.195198508Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 27 09:31:26 pause-174795 crio[2209]: time="2025-12-27T09:31:26.195567215Z" level=info msg="No systemd watchdog enabled"
	Dec 27 09:31:26 pause-174795 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
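
	The journal entry above embeds CRI-O's effective configuration as one escaped string. A minimal Go sketch of pulling a few key settings out of `sudo crio config` instead (the same command the table at the top collects; crio on PATH is assumed):

	// crioconf.go: print selected settings from the rendered CRI-O config.
	package main

	import (
		"bufio"
		"bytes"
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("sudo", "crio", "config").Output()
		if err != nil {
			panic(err)
		}
		keys := []string{"cgroup_manager", "default_runtime", "pause_image"}
		sc := bufio.NewScanner(bytes.NewReader(out))
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			for _, k := range keys {
				if strings.HasPrefix(line, k+" ") {
					fmt.Println(line) // e.g. cgroup_manager = "systemd"
				}
			}
		}
	}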
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	b975d7336536e       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                     12 seconds ago      Running             coredns                   0                   9c3f7db7c0f3e       coredns-7d764666f9-89wgq               kube-system
	8a782982e9ef2       docker.io/kindest/kindnetd@sha256:7c22558dc06a570d46ea6e8a73b23cdc754eb81f7c08d3441a3171ad359ffc27   23 seconds ago      Running             kindnet-cni               0                   5db6bf7ac8653       kindnet-g5wbg                          kube-system
	b0b4fee0b1c73       32652ff1bbe6b6df16a3dc621fcad4e6e185a2852bffc5847aab79e36c54bfb8                                     25 seconds ago      Running             kube-proxy                0                   095ab0b60d730       kube-proxy-48vr2                       kube-system
	965a27d1118c3       0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2                                     36 seconds ago      Running             etcd                      0                   f525f6128ea30       etcd-pause-174795                      kube-system
	528c2fa63ed83       550794e3b12ac21ec7fd940bdfb45f7c8dae3c52acd8c70e580b882de20c3dcc                                     36 seconds ago      Running             kube-scheduler            0                   49bb9c7c239e8       kube-scheduler-pause-174795            kube-system
	01f37880e510c       5c6acd67e9cd10eb60a246cd233db251ef62ea97e6572f897e873f0cb648f499                                     36 seconds ago      Running             kube-apiserver            0                   ad856476a68e3       kube-apiserver-pause-174795            kube-system
	c13a4a0721d48       2c9a4b058bd7e6ec479c38f9e8a1dac2f5ee5b0a3ebda6dfac968e8720229508                                     36 seconds ago      Running             kube-controller-manager   0                   6d090537f7524       kube-controller-manager-pause-174795   kube-system
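
	A minimal Go sketch of summarizing container states like the table above, via `sudo crictl ps -a -o json`; decoding into a generic map keeps the sketch independent of crictl's exact JSON schema, which is assumed here:

	// ctrstate.go: count containers per state from crictl's JSON output.
	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "-o", "json").Output()
		if err != nil {
			panic(err)
		}
		var parsed struct {
			Containers []map[string]interface{} `json:"containers"`
		}
		if err := json.Unmarshal(out, &parsed); err != nil {
			panic(err)
		}
		states := map[string]int{}
		for _, c := range parsed.Containers {
			if s, ok := c["state"].(string); ok {
				states[s]++
			}
		}
		fmt.Println(states) // e.g. map[CONTAINER_RUNNING:7]
	}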
	
	
	==> coredns [b975d7336536e5efcea8ea9dc06ed50c574c3e5ed6df99bd11149b10208da309] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:36613 - 24727 "HINFO IN 4646754786077994241.5643036743240327352. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.072466051s
	
	
	==> describe nodes <==
	Name:               pause-174795
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-174795
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6d12708197eb5984f26340287ced0d25367967e8
	                    minikube.k8s.io/name=pause-174795
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_27T09_31_01_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 27 Dec 2025 09:30:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-174795
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 27 Dec 2025 09:31:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 27 Dec 2025 09:31:19 +0000   Sat, 27 Dec 2025 09:30:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 27 Dec 2025 09:31:19 +0000   Sat, 27 Dec 2025 09:30:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 27 Dec 2025 09:31:19 +0000   Sat, 27 Dec 2025 09:30:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 27 Dec 2025 09:31:19 +0000   Sat, 27 Dec 2025 09:31:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    pause-174795
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 6a05ebbd9dbde47388fc365d694bbbfd
	  System UUID:                a7aed780-fa36-45c2-964d-ecdbcb5f10be
	  Boot ID:                    38891013-55ac-4a66-aef6-0b0711ecc60c
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7d764666f9-89wgq                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     26s
	  kube-system                 etcd-pause-174795                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         32s
	  kube-system                 kindnet-g5wbg                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      26s
	  kube-system                 kube-apiserver-pause-174795             250m (3%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-controller-manager-pause-174795    200m (2%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-proxy-48vr2                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                 kube-scheduler-pause-174795             100m (1%)     0 (0%)      0 (0%)           0 (0%)         32s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  27s   node-controller  Node pause-174795 event: Registered Node pause-174795 in Controller
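
	The "Allocated resources" totals above follow directly from the per-pod requests in the table: 100+100+100+250+200+0+100 = 850m CPU (850m of the node's 8000m is the reported 10%) and 70+100+50 = 220Mi memory (well under 1% of 32863360Ki, hence 0%). A trivial Go check with the values copied from the table:

	// allocchk.go: re-derive the node's allocated-resource totals.
	package main

	import "fmt"

	func main() {
		// coredns, etcd, kindnet, apiserver, controller-manager, proxy, scheduler
		cpuRequestsMilli := []int{100, 100, 100, 250, 200, 0, 100}
		memRequestsMi := []int{70, 100, 50} // coredns, etcd, kindnet; the rest request 0
		sum := func(xs []int) int {
			t := 0
			for _, x := range xs {
				t += x
			}
			return t
		}
		fmt.Printf("cpu: %dm, memory: %dMi\n", sum(cpuRequestsMilli), sum(memRequestsMi)) // cpu: 850m, memory: 220Mi
	}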
	
	
	==> dmesg <==
	[  +5.107432] kauditd_printk_skb: 47 callbacks suppressed
	[Dec27 09:02] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 3a ce 76 ea 00 16 08 06
	[ +15.308804] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 5e ed 76 f4 5a 59 08 06
	[  +0.000321] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 3a ce 76 ea 00 16 08 06
	[  +1.257710] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 62 30 8d 6e b6 d3 08 06
	[Dec27 09:03] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff de f8 3b c4 47 35 08 06
	[  +0.002052] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff de 0f d7 5a 4b 84 08 06
	[ +15.582399] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff f6 ce 0c 6c 86 d2 08 06
	[  +0.000307] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 62 30 8d 6e b6 d3 08 06
	[  +9.916919] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 5e cc 0c a0 d8 19 08 06
	[  +0.000360] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 06 01 72 f8 79 43 08 06
	[ +21.695394] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 66 bb be 81 50 ce 08 06
	[  +0.000381] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff de 0f d7 5a 4b 84 08 06
	
	
	==> etcd [965a27d1118c331b5db7062d66e00d6ed567564250c8903fa341758b374f9146] <==
	{"level":"warn","ts":"2025-12-27T09:31:05.603001Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"135.811051ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/legacy-service-account-token-cleaner\" limit:1 ","response":"range_response_count:1 size:238"}
	{"level":"info","ts":"2025-12-27T09:31:05.603049Z","caller":"traceutil/trace.go:172","msg":"trace[1229698832] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/legacy-service-account-token-cleaner; range_end:; response_count:1; response_revision:299; }","duration":"135.870832ms","start":"2025-12-27T09:31:05.467168Z","end":"2025-12-27T09:31:05.603039Z","steps":["trace[1229698832] 'range keys from in-memory index tree'  (duration: 124.152141ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-27T09:31:05.603116Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"124.26675ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15638357589045143119 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/pause-174795.1885089524326247\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/pause-174795.1885089524326247\" value_size:597 lease:6414985552190367093 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2025-12-27T09:31:05.603158Z","caller":"traceutil/trace.go:172","msg":"trace[709271455] transaction","detail":"{read_only:false; response_revision:300; number_of_response:1; }","duration":"136.157883ms","start":"2025-12-27T09:31:05.466989Z","end":"2025-12-27T09:31:05.603147Z","steps":["trace[709271455] 'process raft request'  (duration: 11.823073ms)","trace[709271455] 'compare'  (duration: 123.768142ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-27T09:31:05.640871Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"170.646245ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/root-ca-cert-publisher\" limit:1 ","response":"range_response_count:1 size:209"}
	{"level":"info","ts":"2025-12-27T09:31:05.640924Z","caller":"traceutil/trace.go:172","msg":"trace[700167849] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/root-ca-cert-publisher; range_end:; response_count:1; response_revision:300; }","duration":"170.714591ms","start":"2025-12-27T09:31:05.470199Z","end":"2025-12-27T09:31:05.640914Z","steps":["trace[700167849] 'agreement among raft nodes before linearized reading'  (duration: 170.549861ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-27T09:31:05.640767Z","caller":"traceutil/trace.go:172","msg":"trace[1098471966] linearizableReadLoop","detail":"{readStateIndex:312; appliedIndex:312; }","duration":"162.010273ms","start":"2025-12-27T09:31:05.478730Z","end":"2025-12-27T09:31:05.640741Z","steps":["trace[1098471966] 'read index received'  (duration: 162.001674ms)","trace[1098471966] 'applied index is now lower than readState.Index'  (duration: 7.611µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-27T09:31:05.641324Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"173.1971ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/daemon-set-controller\" limit:1 ","response":"range_response_count:1 size:207"}
	{"level":"info","ts":"2025-12-27T09:31:05.643004Z","caller":"traceutil/trace.go:172","msg":"trace[377871606] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/daemon-set-controller; range_end:; response_count:1; response_revision:301; }","duration":"174.927109ms","start":"2025-12-27T09:31:05.468062Z","end":"2025-12-27T09:31:05.642989Z","steps":["trace[377871606] 'agreement among raft nodes before linearized reading'  (duration: 173.006069ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-27T09:31:05.641413Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"172.935207ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/certificate-controller\" limit:1 ","response":"range_response_count:1 size:209"}
	{"level":"info","ts":"2025-12-27T09:31:05.643145Z","caller":"traceutil/trace.go:172","msg":"trace[550007437] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/certificate-controller; range_end:; response_count:1; response_revision:301; }","duration":"174.673231ms","start":"2025-12-27T09:31:05.468463Z","end":"2025-12-27T09:31:05.643137Z","steps":["trace[550007437] 'agreement among raft nodes before linearized reading'  (duration: 172.72495ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-27T09:31:05.641117Z","caller":"traceutil/trace.go:172","msg":"trace[527544858] transaction","detail":"{read_only:false; response_revision:301; number_of_response:1; }","duration":"170.823124ms","start":"2025-12-27T09:31:05.470260Z","end":"2025-12-27T09:31:05.641084Z","steps":["trace[527544858] 'process raft request'  (duration: 170.489613ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-27T09:31:05.642479Z","caller":"traceutil/trace.go:172","msg":"trace[320369843] transaction","detail":"{read_only:false; response_revision:302; number_of_response:1; }","duration":"154.307152ms","start":"2025-12-27T09:31:05.488161Z","end":"2025-12-27T09:31:05.642468Z","steps":["trace[320369843] 'process raft request'  (duration: 154.237725ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-27T09:31:05.643605Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"124.942181ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/bootstrap-signer\" limit:1 ","response":"range_response_count:1 size:197"}
	{"level":"info","ts":"2025-12-27T09:31:05.643633Z","caller":"traceutil/trace.go:172","msg":"trace[699437485] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/bootstrap-signer; range_end:; response_count:1; response_revision:302; }","duration":"124.985796ms","start":"2025-12-27T09:31:05.518640Z","end":"2025-12-27T09:31:05.643626Z","steps":["trace[699437485] 'agreement among raft nodes before linearized reading'  (duration: 124.89014ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-27T09:31:05.643779Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"156.437834ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-scheduler-pause-174795\" limit:1 ","response":"range_response_count:1 size:4975"}
	{"level":"info","ts":"2025-12-27T09:31:05.643826Z","caller":"traceutil/trace.go:172","msg":"trace[554193686] range","detail":"{range_begin:/registry/pods/kube-system/kube-scheduler-pause-174795; range_end:; response_count:1; response_revision:302; }","duration":"156.489663ms","start":"2025-12-27T09:31:05.487329Z","end":"2025-12-27T09:31:05.643819Z","steps":["trace[554193686] 'agreement among raft nodes before linearized reading'  (duration: 156.380642ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-27T09:31:05.644183Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"157.034871ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-apiserver-pause-174795\" limit:1 ","response":"range_response_count:1 size:7820"}
	{"level":"info","ts":"2025-12-27T09:31:05.644216Z","caller":"traceutil/trace.go:172","msg":"trace[1621207227] range","detail":"{range_begin:/registry/pods/kube-system/kube-apiserver-pause-174795; range_end:; response_count:1; response_revision:302; }","duration":"157.072036ms","start":"2025-12-27T09:31:05.487137Z","end":"2025-12-27T09:31:05.644209Z","steps":["trace[1621207227] 'agreement among raft nodes before linearized reading'  (duration: 157.006758ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-27T09:31:05.644574Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"157.223434ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-pause-174795\" limit:1 ","response":"range_response_count:1 size:5986"}
	{"level":"info","ts":"2025-12-27T09:31:05.644617Z","caller":"traceutil/trace.go:172","msg":"trace[67364716] range","detail":"{range_begin:/registry/pods/kube-system/etcd-pause-174795; range_end:; response_count:1; response_revision:302; }","duration":"157.269679ms","start":"2025-12-27T09:31:05.487339Z","end":"2025-12-27T09:31:05.644609Z","steps":["trace[67364716] 'agreement among raft nodes before linearized reading'  (duration: 156.582492ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-27T09:31:05.644918Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"156.254344ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/csinodes/pause-174795\" limit:1 ","response":"range_response_count:1 size:706"}
	{"level":"info","ts":"2025-12-27T09:31:05.644962Z","caller":"traceutil/trace.go:172","msg":"trace[2043764900] range","detail":"{range_begin:/registry/csinodes/pause-174795; range_end:; response_count:1; response_revision:302; }","duration":"158.043928ms","start":"2025-12-27T09:31:05.486909Z","end":"2025-12-27T09:31:05.644953Z","steps":["trace[2043764900] 'agreement among raft nodes before linearized reading'  (duration: 156.080783ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-27T09:31:05.645296Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"158.267513ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-controller-manager-pause-174795\" limit:1 ","response":"range_response_count:1 size:7493"}
	{"level":"info","ts":"2025-12-27T09:31:05.645338Z","caller":"traceutil/trace.go:172","msg":"trace[1737732446] range","detail":"{range_begin:/registry/pods/kube-system/kube-controller-manager-pause-174795; range_end:; response_count:1; response_revision:302; }","duration":"158.312937ms","start":"2025-12-27T09:31:05.487017Z","end":"2025-12-27T09:31:05.645330Z","steps":["trace[1737732446] 'agreement among raft nodes before linearized reading'  (duration: 157.489036ms)"],"step_count":1}
	
	
	==> kernel <==
	 09:31:32 up  1:13,  0 user,  load average: 5.48, 2.29, 1.77
	Linux pause-174795 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [8a782982e9ef2e0e8def8c7ce58acd1b563b89ecea1122ed0cf5b86a00416acf] <==
	I1227 09:31:09.327052       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1227 09:31:09.327316       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1227 09:31:09.327479       1 main.go:148] setting mtu 1500 for CNI 
	I1227 09:31:09.327507       1 main.go:178] kindnetd IP family: "ipv4"
	I1227 09:31:09.327531       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-27T09:31:09Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1227 09:31:09.530603       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1227 09:31:09.530647       1 controller.go:381] "Waiting for informer caches to sync"
	I1227 09:31:09.530659       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1227 09:31:09.622089       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1227 09:31:10.121973       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1227 09:31:10.122004       1 metrics.go:72] Registering metrics
	I1227 09:31:10.122090       1 controller.go:711] "Syncing nftables rules"
	I1227 09:31:19.530902       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1227 09:31:19.530983       1 main.go:301] handling current node
	I1227 09:31:29.537889       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1227 09:31:29.537934       1 main.go:301] handling current node
	
	
	==> kube-apiserver [01f37880e510cd54c4ee5969744484076b9cec30f33cf150fb6e73162a9f073f] <==
	I1227 09:30:58.432255       1 shared_informer.go:377] "Caches are synced"
	I1227 09:30:58.435913       1 shared_informer.go:377] "Caches are synced"
	I1227 09:30:58.437239       1 controller.go:667] quota admission added evaluator for: namespaces
	I1227 09:30:58.437480       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1227 09:30:58.445946       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1227 09:30:58.446309       1 default_servicecidr_controller.go:231] Setting default ServiceCIDR condition Ready to True
	I1227 09:30:58.449445       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1227 09:30:58.463485       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1227 09:30:59.332142       1 storage_scheduling.go:123] created PriorityClass system-node-critical with value 2000001000
	I1227 09:30:59.335835       1 storage_scheduling.go:123] created PriorityClass system-cluster-critical with value 2000000000
	I1227 09:30:59.335853       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1227 09:30:59.849335       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1227 09:30:59.886062       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1227 09:30:59.952552       1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1227 09:30:59.959780       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1227 09:30:59.960821       1 controller.go:667] quota admission added evaluator for: endpoints
	I1227 09:30:59.966711       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1227 09:31:00.412601       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1227 09:31:00.745648       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1227 09:31:00.765153       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1227 09:31:00.777517       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1227 09:31:06.023604       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1227 09:31:06.078748       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1227 09:31:06.086635       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1227 09:31:06.327238       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [c13a4a0721d48b9d41dfe2ff8c5daaa20149037a95ecde103a52610e6c1d9403] <==
	I1227 09:31:05.482967       1 shared_informer.go:377] "Caches are synced"
	I1227 09:31:05.482986       1 shared_informer.go:377] "Caches are synced"
	I1227 09:31:05.483016       1 shared_informer.go:377] "Caches are synced"
	I1227 09:31:05.483352       1 shared_informer.go:377] "Caches are synced"
	I1227 09:31:05.484033       1 shared_informer.go:377] "Caches are synced"
	I1227 09:31:05.484056       1 shared_informer.go:377] "Caches are synced"
	I1227 09:31:05.484119       1 shared_informer.go:377] "Caches are synced"
	I1227 09:31:05.484167       1 range_allocator.go:177] "Sending events to api server"
	I1227 09:31:05.484194       1 range_allocator.go:181] "Starting range CIDR allocator"
	I1227 09:31:05.484199       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 09:31:05.484202       1 shared_informer.go:377] "Caches are synced"
	I1227 09:31:05.485289       1 shared_informer.go:377] "Caches are synced"
	I1227 09:31:05.485312       1 shared_informer.go:377] "Caches are synced"
	I1227 09:31:05.485344       1 shared_informer.go:377] "Caches are synced"
	I1227 09:31:05.485347       1 shared_informer.go:377] "Caches are synced"
	I1227 09:31:05.485431       1 shared_informer.go:377] "Caches are synced"
	I1227 09:31:05.485438       1 shared_informer.go:377] "Caches are synced"
	I1227 09:31:05.485447       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1227 09:31:05.485452       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1227 09:31:05.485528       1 shared_informer.go:377] "Caches are synced"
	I1227 09:31:05.485649       1 shared_informer.go:377] "Caches are synced"
	I1227 09:31:05.491686       1 shared_informer.go:377] "Caches are synced"
	I1227 09:31:05.511436       1 shared_informer.go:377] "Caches are synced"
	I1227 09:31:05.646920       1 range_allocator.go:433] "Set node PodCIDR" node="pause-174795" podCIDRs=["10.244.0.0/24"]
	I1227 09:31:20.466468       1 node_lifecycle_controller.go:1057] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-proxy [b0b4fee0b1c730e30a9ae0a208fd8f210bae3628512db98fba50b8441d9bd6c2] <==
	I1227 09:31:07.055412       1 server_linux.go:53] "Using iptables proxy"
	I1227 09:31:07.120972       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 09:31:07.221705       1 shared_informer.go:377] "Caches are synced"
	I1227 09:31:07.221741       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1227 09:31:07.221836       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1227 09:31:07.242771       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1227 09:31:07.242826       1 server_linux.go:136] "Using iptables Proxier"
	I1227 09:31:07.247687       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1227 09:31:07.248055       1 server.go:529] "Version info" version="v1.35.0"
	I1227 09:31:07.248080       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 09:31:07.249547       1 config.go:200] "Starting service config controller"
	I1227 09:31:07.249567       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1227 09:31:07.249586       1 config.go:106] "Starting endpoint slice config controller"
	I1227 09:31:07.249591       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1227 09:31:07.249631       1 config.go:403] "Starting serviceCIDR config controller"
	I1227 09:31:07.249641       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1227 09:31:07.249683       1 config.go:309] "Starting node config controller"
	I1227 09:31:07.249700       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1227 09:31:07.249707       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1227 09:31:07.350286       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1227 09:31:07.350290       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1227 09:31:07.350332       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [528c2fa63ed83cec23ae93d5b3d91cb11f694c3f854211e3c5a4b5019745964e] <==
	E1227 09:30:58.452134       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E1227 09:30:58.452262       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E1227 09:30:58.452255       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E1227 09:30:58.452429       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E1227 09:30:58.452446       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E1227 09:30:58.452539       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E1227 09:30:58.452559       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E1227 09:30:58.452562       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E1227 09:30:58.452630       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E1227 09:30:58.452700       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E1227 09:30:58.452717       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E1227 09:30:58.452764       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E1227 09:30:58.452864       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E1227 09:30:58.453052       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E1227 09:30:58.453237       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E1227 09:30:59.363266       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E1227 09:30:59.397894       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E1227 09:30:59.483710       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E1227 09:30:59.489219       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E1227 09:30:59.552716       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E1227 09:30:59.635935       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E1227 09:30:59.637565       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E1227 09:30:59.650363       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E1227 09:30:59.671106       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	I1227 09:31:00.016132       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 27 09:31:06 pause-174795 kubelet[1296]: I1227 09:31:06.115044    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2a2c4505-e238-48ad-b532-6f730cd2d36b-kube-proxy\") pod \"kube-proxy-48vr2\" (UID: \"2a2c4505-e238-48ad-b532-6f730cd2d36b\") " pod="kube-system/kube-proxy-48vr2"
	Dec 27 09:31:06 pause-174795 kubelet[1296]: E1227 09:31:06.225990    1296 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Dec 27 09:31:06 pause-174795 kubelet[1296]: E1227 09:31:06.226035    1296 projected.go:196] Error preparing data for projected volume kube-api-access-vc2tc for pod kube-system/kindnet-g5wbg: configmap "kube-root-ca.crt" not found
	Dec 27 09:31:06 pause-174795 kubelet[1296]: E1227 09:31:06.226120    1296 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1047d45c-b132-440a-a5b7-2ba40ff11397-kube-api-access-vc2tc podName:1047d45c-b132-440a-a5b7-2ba40ff11397 nodeName:}" failed. No retries permitted until 2025-12-27 09:31:06.726090581 +0000 UTC m=+6.042325238 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-vc2tc" (UniqueName: "kubernetes.io/projected/1047d45c-b132-440a-a5b7-2ba40ff11397-kube-api-access-vc2tc") pod "kindnet-g5wbg" (UID: "1047d45c-b132-440a-a5b7-2ba40ff11397") : configmap "kube-root-ca.crt" not found
	Dec 27 09:31:06 pause-174795 kubelet[1296]: E1227 09:31:06.229400    1296 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Dec 27 09:31:06 pause-174795 kubelet[1296]: E1227 09:31:06.229440    1296 projected.go:196] Error preparing data for projected volume kube-api-access-76smf for pod kube-system/kube-proxy-48vr2: configmap "kube-root-ca.crt" not found
	Dec 27 09:31:06 pause-174795 kubelet[1296]: E1227 09:31:06.229517    1296 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2a2c4505-e238-48ad-b532-6f730cd2d36b-kube-api-access-76smf podName:2a2c4505-e238-48ad-b532-6f730cd2d36b nodeName:}" failed. No retries permitted until 2025-12-27 09:31:06.729487626 +0000 UTC m=+6.045722278 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-76smf" (UniqueName: "kubernetes.io/projected/2a2c4505-e238-48ad-b532-6f730cd2d36b-kube-api-access-76smf") pod "kube-proxy-48vr2" (UID: "2a2c4505-e238-48ad-b532-6f730cd2d36b") : configmap "kube-root-ca.crt" not found
	Dec 27 09:31:07 pause-174795 kubelet[1296]: I1227 09:31:07.840600    1296 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-48vr2" podStartSLOduration=1.840580337 podStartE2EDuration="1.840580337s" podCreationTimestamp="2025-12-27 09:31:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 09:31:07.840568912 +0000 UTC m=+7.156803568" watchObservedRunningTime="2025-12-27 09:31:07.840580337 +0000 UTC m=+7.156814995"
	Dec 27 09:31:08 pause-174795 kubelet[1296]: E1227 09:31:08.331880    1296 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-pause-174795" containerName="kube-apiserver"
	Dec 27 09:31:09 pause-174795 kubelet[1296]: I1227 09:31:09.847199    1296 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kindnet-g5wbg" podStartSLOduration=1.680941422 podStartE2EDuration="3.847182816s" podCreationTimestamp="2025-12-27 09:31:06 +0000 UTC" firstStartedPulling="2025-12-27 09:31:06.959278397 +0000 UTC m=+6.275513048" lastFinishedPulling="2025-12-27 09:31:09.125519794 +0000 UTC m=+8.441754442" observedRunningTime="2025-12-27 09:31:09.846947126 +0000 UTC m=+9.163181782" watchObservedRunningTime="2025-12-27 09:31:09.847182816 +0000 UTC m=+9.163417473"
	Dec 27 09:31:11 pause-174795 kubelet[1296]: E1227 09:31:11.987899    1296 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-pause-174795" containerName="etcd"
	Dec 27 09:31:14 pause-174795 kubelet[1296]: E1227 09:31:14.489164    1296 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-pause-174795" containerName="kube-controller-manager"
	Dec 27 09:31:15 pause-174795 kubelet[1296]: E1227 09:31:15.179703    1296 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-pause-174795" containerName="kube-scheduler"
	Dec 27 09:31:18 pause-174795 kubelet[1296]: E1227 09:31:18.336534    1296 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-pause-174795" containerName="kube-apiserver"
	Dec 27 09:31:19 pause-174795 kubelet[1296]: I1227 09:31:19.833308    1296 kubelet_node_status.go:427] "Fast updating node status as it just became ready"
	Dec 27 09:31:19 pause-174795 kubelet[1296]: I1227 09:31:19.916345    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wnpv8\" (UniqueName: \"kubernetes.io/projected/d9d69cbd-cc63-4c35-ac3d-ce5778fb32d0-kube-api-access-wnpv8\") pod \"coredns-7d764666f9-89wgq\" (UID: \"d9d69cbd-cc63-4c35-ac3d-ce5778fb32d0\") " pod="kube-system/coredns-7d764666f9-89wgq"
	Dec 27 09:31:19 pause-174795 kubelet[1296]: I1227 09:31:19.916400    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d9d69cbd-cc63-4c35-ac3d-ce5778fb32d0-config-volume\") pod \"coredns-7d764666f9-89wgq\" (UID: \"d9d69cbd-cc63-4c35-ac3d-ce5778fb32d0\") " pod="kube-system/coredns-7d764666f9-89wgq"
	Dec 27 09:31:20 pause-174795 kubelet[1296]: E1227 09:31:20.858688    1296 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-89wgq" containerName="coredns"
	Dec 27 09:31:20 pause-174795 kubelet[1296]: I1227 09:31:20.868623    1296 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-89wgq" podStartSLOduration=14.86860849 podStartE2EDuration="14.86860849s" podCreationTimestamp="2025-12-27 09:31:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 09:31:20.86847511 +0000 UTC m=+20.184709765" watchObservedRunningTime="2025-12-27 09:31:20.86860849 +0000 UTC m=+20.184843145"
	Dec 27 09:31:21 pause-174795 kubelet[1296]: E1227 09:31:21.861221    1296 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-89wgq" containerName="coredns"
	Dec 27 09:31:22 pause-174795 kubelet[1296]: E1227 09:31:22.863922    1296 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-89wgq" containerName="coredns"
	Dec 27 09:31:29 pause-174795 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 27 09:31:29 pause-174795 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 27 09:31:29 pause-174795 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 27 09:31:29 pause-174795 systemd[1]: kubelet.service: Consumed 1.293s CPU time.
	

-- /stdout --
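The kube-scheduler "Failed to watch ... is forbidden" errors above appear to be transient: they were logged while the restarted API server was still reconciling the default RBAC bindings, and they stop by the final "Caches are synced" line. If they persisted, the scheduler's permissions could be probed directly with kubectl impersonation; the commands below are a diagnostic sketch against this profile, not part of the test suite:

	# can the scheduler identity list one of the resources it failed to watch?
	kubectl --context pause-174795 auth can-i list storageclasses.storage.k8s.io --as=system:kube-scheduler
	# the kubelet mount retries above also waited on this configmap; confirm it exists now
	kubectl --context pause-174795 -n kube-system get configmap kube-root-ca.crt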
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-174795 -n pause-174795
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-174795 -n pause-174795: exit status 2 (364.739902ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context pause-174795 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect pause-174795
helpers_test.go:244: (dbg) docker inspect pause-174795:

-- stdout --
	[
	    {
	        "Id": "fb583517a621302e25a369bf56648caa74bac98b6bb5ea76ecd637e17456385e",
	        "Created": "2025-12-27T09:30:43.897847375Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 536238,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-27T09:30:43.951408389Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c444b5e11df9d6bc496256b0e11f4e11deb33a885211ded2c3a4667df55bcb6b",
	        "ResolvConfPath": "/var/lib/docker/containers/fb583517a621302e25a369bf56648caa74bac98b6bb5ea76ecd637e17456385e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/fb583517a621302e25a369bf56648caa74bac98b6bb5ea76ecd637e17456385e/hostname",
	        "HostsPath": "/var/lib/docker/containers/fb583517a621302e25a369bf56648caa74bac98b6bb5ea76ecd637e17456385e/hosts",
	        "LogPath": "/var/lib/docker/containers/fb583517a621302e25a369bf56648caa74bac98b6bb5ea76ecd637e17456385e/fb583517a621302e25a369bf56648caa74bac98b6bb5ea76ecd637e17456385e-json.log",
	        "Name": "/pause-174795",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-174795:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "pause-174795",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "fb583517a621302e25a369bf56648caa74bac98b6bb5ea76ecd637e17456385e",
	                "LowerDir": "/var/lib/docker/overlay2/bb743371dc7f85d19dd913da73ed97f5db45f7be44168311cb726c510acf15a9-init/diff:/var/lib/docker/overlay2/8c7e7d4b0a6e752d3b033998593f18412fe880cdae3fca36ce4655d5d5dd6a34/diff",
	                "MergedDir": "/var/lib/docker/overlay2/bb743371dc7f85d19dd913da73ed97f5db45f7be44168311cb726c510acf15a9/merged",
	                "UpperDir": "/var/lib/docker/overlay2/bb743371dc7f85d19dd913da73ed97f5db45f7be44168311cb726c510acf15a9/diff",
	                "WorkDir": "/var/lib/docker/overlay2/bb743371dc7f85d19dd913da73ed97f5db45f7be44168311cb726c510acf15a9/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-174795",
	                "Source": "/var/lib/docker/volumes/pause-174795/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-174795",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-174795",
	                "name.minikube.sigs.k8s.io": "pause-174795",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "64b2cf89e9c902bdff177078c723955e16a39afb97df89c31fc040050b88b826",
	            "SandboxKey": "/var/run/docker/netns/64b2cf89e9c9",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33348"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33349"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33352"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33350"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33351"
	                    }
	                ]
	            },
	            "Networks": {
	                "pause-174795": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "829fcc9c5d238427e4b76ddfe3d2ba33dd3ee39ce0a6fdd1993cc878d7016258",
	                    "EndpointID": "6aafbb01d09f971de4aba863b6bc41b4320ec0eda8a885733c1a52452bf5a8ea",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "52:93:2d:ce:7d:e3",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-174795",
	                        "fb583517a621"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
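The inspect JSON above is large, but single fields can be pulled out with docker's Go-template support; the following sketch extracts the host port mapped to the guest SSH port (33348 in the output above) and the run/pause state that this test cares about:

	# host port bound to the container's 22/tcp
	docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' pause-174795
	# running vs. paused state
	docker inspect -f '{{.State.Status}} paused={{.State.Paused}}' pause-174795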
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-174795 -n pause-174795
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-174795 -n pause-174795: exit status 2 (366.568638ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p pause-174795 logs -n 25
helpers_test.go:261: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────┬───────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                     ARGS                                      │    PROFILE    │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────┼───────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p false-157923 sudo systemctl status cri-docker --all --full --no-pager      │ false-157923  │ jenkins │ v1.37.0 │ 27 Dec 25 09:31 UTC │                     │
	│ ssh     │ -p false-157923 sudo systemctl cat cri-docker --no-pager                      │ false-157923  │ jenkins │ v1.37.0 │ 27 Dec 25 09:31 UTC │                     │
	│ ssh     │ -p false-157923 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf │ false-157923  │ jenkins │ v1.37.0 │ 27 Dec 25 09:31 UTC │                     │
	│ ssh     │ -p false-157923 sudo cat /usr/lib/systemd/system/cri-docker.service           │ false-157923  │ jenkins │ v1.37.0 │ 27 Dec 25 09:31 UTC │                     │
	│ ssh     │ -p false-157923 sudo cri-dockerd --version                                    │ false-157923  │ jenkins │ v1.37.0 │ 27 Dec 25 09:31 UTC │                     │
	│ ssh     │ -p false-157923 sudo systemctl status containerd --all --full --no-pager      │ false-157923  │ jenkins │ v1.37.0 │ 27 Dec 25 09:31 UTC │                     │
	│ ssh     │ -p false-157923 sudo systemctl cat containerd --no-pager                      │ false-157923  │ jenkins │ v1.37.0 │ 27 Dec 25 09:31 UTC │                     │
	│ ssh     │ -p false-157923 sudo cat /lib/systemd/system/containerd.service               │ false-157923  │ jenkins │ v1.37.0 │ 27 Dec 25 09:31 UTC │                     │
	│ ssh     │ -p false-157923 sudo cat /etc/containerd/config.toml                          │ false-157923  │ jenkins │ v1.37.0 │ 27 Dec 25 09:31 UTC │                     │
	│ ssh     │ -p false-157923 sudo containerd config dump                                   │ false-157923  │ jenkins │ v1.37.0 │ 27 Dec 25 09:31 UTC │                     │
	│ ssh     │ -p false-157923 sudo systemctl status crio --all --full --no-pager            │ false-157923  │ jenkins │ v1.37.0 │ 27 Dec 25 09:31 UTC │                     │
	│ ssh     │ -p false-157923 sudo systemctl cat crio --no-pager                            │ false-157923  │ jenkins │ v1.37.0 │ 27 Dec 25 09:31 UTC │                     │
	│ ssh     │ -p false-157923 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;  │ false-157923  │ jenkins │ v1.37.0 │ 27 Dec 25 09:31 UTC │                     │
	│ ssh     │ -p false-157923 sudo crio config                                              │ false-157923  │ jenkins │ v1.37.0 │ 27 Dec 25 09:31 UTC │                     │
	│ delete  │ -p false-157923                                                               │ false-157923  │ jenkins │ v1.37.0 │ 27 Dec 25 09:31 UTC │ 27 Dec 25 09:31 UTC │
	│ ssh     │ -p cilium-157923 sudo cat /etc/nsswitch.conf                                  │ cilium-157923 │ jenkins │ v1.37.0 │ 27 Dec 25 09:31 UTC │                     │
	│ ssh     │ -p cilium-157923 sudo cat /etc/hosts                                          │ cilium-157923 │ jenkins │ v1.37.0 │ 27 Dec 25 09:31 UTC │                     │
	│ ssh     │ -p cilium-157923 sudo cat /etc/resolv.conf                                    │ cilium-157923 │ jenkins │ v1.37.0 │ 27 Dec 25 09:31 UTC │                     │
	│ ssh     │ -p cilium-157923 sudo crictl pods                                             │ cilium-157923 │ jenkins │ v1.37.0 │ 27 Dec 25 09:31 UTC │                     │
	│ ssh     │ -p cilium-157923 sudo crictl ps --all                                         │ cilium-157923 │ jenkins │ v1.37.0 │ 27 Dec 25 09:31 UTC │                     │
	│ ssh     │ -p cilium-157923 sudo find /etc/cni -type f -exec sh -c 'echo {}; cat {}' \;  │ cilium-157923 │ jenkins │ v1.37.0 │ 27 Dec 25 09:31 UTC │                     │
	│ ssh     │ -p cilium-157923 sudo ip a s                                                  │ cilium-157923 │ jenkins │ v1.37.0 │ 27 Dec 25 09:31 UTC │                     │
	│ ssh     │ -p cilium-157923 sudo ip r s                                                  │ cilium-157923 │ jenkins │ v1.37.0 │ 27 Dec 25 09:31 UTC │                     │
	│ ssh     │ -p cilium-157923 sudo iptables-save                                           │ cilium-157923 │ jenkins │ v1.37.0 │ 27 Dec 25 09:31 UTC │                     │
	│ ssh     │ -p cilium-157923 sudo iptables -t nat -L -n -v                                │ cilium-157923 │ jenkins │ v1.37.0 │ 27 Dec 25 09:31 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────┴───────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/27 09:31:29
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1227 09:31:29.745259  552684 out.go:360] Setting OutFile to fd 1 ...
	I1227 09:31:29.746010  552684 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:31:29.746015  552684 out.go:374] Setting ErrFile to fd 2...
	I1227 09:31:29.746021  552684 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:31:29.746343  552684 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22343-373581/.minikube/bin
	I1227 09:31:29.746957  552684 out.go:368] Setting JSON to false
	I1227 09:31:29.748370  552684 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4434,"bootTime":1766823456,"procs":282,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1227 09:31:29.748445  552684 start.go:143] virtualization: kvm guest
	I1227 09:31:29.751102  552684 out.go:179] * [NoKubernetes-397662] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1227 09:31:29.752237  552684 out.go:179]   - MINIKUBE_LOCATION=22343
	I1227 09:31:29.752243  552684 notify.go:221] Checking for updates...
	I1227 09:31:29.754333  552684 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 09:31:29.755501  552684 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22343-373581/kubeconfig
	I1227 09:31:29.756675  552684 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22343-373581/.minikube
	I1227 09:31:29.757950  552684 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1227 09:31:29.759175  552684 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1227 09:31:29.761147  552684 config.go:182] Loaded profile config "NoKubernetes-397662": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1227 09:31:29.761957  552684 start.go:1810] No Kubernetes version set for minikube, setting Kubernetes version to v0.0.0
	I1227 09:31:29.762003  552684 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 09:31:29.799178  552684 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1227 09:31:29.799274  552684 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 09:31:29.866838  552684 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:false NGoroutines:65 SystemTime:2025-12-27 09:31:29.857223541 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1227 09:31:29.866936  552684 docker.go:319] overlay module found
	I1227 09:31:29.868875  552684 out.go:179] * Using the docker driver based on existing profile
	I1227 09:31:29.870103  552684 start.go:309] selected driver: docker
	I1227 09:31:29.870114  552684 start.go:928] validating driver "docker" against &{Name:NoKubernetes-397662 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v0.0.0 ClusterName:NoKubernetes-397662 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 09:31:29.870213  552684 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 09:31:29.870314  552684 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 09:31:29.952734  552684 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:false NGoroutines:65 SystemTime:2025-12-27 09:31:29.93927667 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1227 09:31:29.953659  552684 cni.go:84] Creating CNI manager for ""
	I1227 09:31:29.953745  552684 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 09:31:29.953824  552684 start.go:353] cluster config:
	{Name:NoKubernetes-397662 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v0.0.0 ClusterName:NoKubernetes-397662 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 09:31:29.955484  552684 out.go:179] * Starting minikube without Kubernetes in cluster NoKubernetes-397662
	I1227 09:31:29.956597  552684 cache.go:134] Beginning downloading kic base image for docker with crio
	I1227 09:31:29.958199  552684 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
	I1227 09:31:29.959526  552684 preload.go:188] Checking if preload exists for k8s version v0.0.0 and runtime crio
	I1227 09:31:29.959714  552684 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1227 09:31:29.993743  552684 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon, skipping pull
	I1227 09:31:29.993758  552684 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in daemon, skipping load
	W1227 09:31:30.261334  552684 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v0.0.0/preloaded-images-k8s-v18-v0.0.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	W1227 09:31:30.473110  552684 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v0.0.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I1227 09:31:30.473268  552684 profile.go:143] Saving config to /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/NoKubernetes-397662/config.json ...
	I1227 09:31:30.474292  552684 cache.go:243] Successfully downloaded all kic artifacts
	I1227 09:31:30.474347  552684 start.go:360] acquireMachinesLock for NoKubernetes-397662: {Name:mk4627a2ed4c9cebfe17f0db60c5fb16b072ab8f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 09:31:30.474413  552684 start.go:364] duration metric: took 47.802µs to acquireMachinesLock for "NoKubernetes-397662"
	I1227 09:31:30.474426  552684 start.go:96] Skipping create...Using existing machine configuration
	I1227 09:31:30.474433  552684 fix.go:54] fixHost starting: 
	I1227 09:31:30.474764  552684 cli_runner.go:164] Run: docker container inspect NoKubernetes-397662 --format={{.State.Status}}
	I1227 09:31:30.498601  552684 fix.go:112] recreateIfNeeded on NoKubernetes-397662: state=Stopped err=<nil>
	W1227 09:31:30.498626  552684 fix.go:138] unexpected machine state, will restart: <nil>
	I1227 09:31:29.466081  547714 cli_runner.go:164] Run: docker network inspect stopped-upgrade-196124 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 09:31:29.487398  547714 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1227 09:31:29.492018  547714 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 09:31:29.505469  547714 kubeadm.go:884] updating cluster {Name:stopped-upgrade-196124 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:stopped-upgrade-196124 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1227 09:31:29.505595  547714 preload.go:188] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1227 09:31:29.505656  547714 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 09:31:29.561589  547714 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 09:31:29.561613  547714 crio.go:433] Images already preloaded, skipping extraction
	I1227 09:31:29.561676  547714 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 09:31:29.599288  547714 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 09:31:29.599319  547714 cache_images.go:86] Images are preloaded, skipping loading
	I1227 09:31:29.599329  547714 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.32.0 crio true true} ...
	I1227 09:31:29.599447  547714 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=stopped-upgrade-196124 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.0 ClusterName:stopped-upgrade-196124 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
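	The [Unit]/[Service]/[Install] fragment above is the kubelet systemd drop-in that gets copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below. To see the unit as systemd actually resolves it on the node, one could mirror the audit-table commands, e.g. (a diagnostic sketch, not part of this run):
	
	out/minikube-linux-amd64 -p stopped-upgrade-196124 ssh sudo systemctl cat kubelet --no-pager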
	I1227 09:31:29.599531  547714 ssh_runner.go:195] Run: crio config
	I1227 09:31:29.650560  547714 cni.go:84] Creating CNI manager for ""
	I1227 09:31:29.650584  547714 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 09:31:29.650606  547714 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1227 09:31:29.650628  547714 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-196124 NodeName:stopped-upgrade-196124 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1227 09:31:29.650751  547714 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "stopped-upgrade-196124"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.32.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
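	The generated kubeadm config above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) is written to /var/tmp/minikube/kubeadm.yaml.new below. On recent kubeadm releases (v1.26+), such a file can be sanity-checked on the node before use; a sketch, assuming the file has already been copied there:
	
	sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new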
	
	I1227 09:31:29.650845  547714 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
	I1227 09:31:29.661449  547714 binaries.go:51] Found k8s binaries, skipping transfer
	I1227 09:31:29.661510  547714 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1227 09:31:29.671167  547714 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (373 bytes)
	I1227 09:31:29.690692  547714 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1227 09:31:29.712023  547714 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I1227 09:31:29.733510  547714 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1227 09:31:29.742309  547714 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
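
	The /etc/hosts rewrite above is an idempotent update: filter out any previous tab-separated entry for control-plane.minikube.internal, append the fresh one, and copy the temp file back so the change lands in a single cp. The same pattern, generalized (a sketch; NAME and IP are placeholders):

	NAME=control-plane.minikube.internal
	IP=192.168.103.2
	{ grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts
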
	I1227 09:31:29.757483  547714 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 09:31:29.849240  547714 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 09:31:29.870439  547714 certs.go:69] Setting up /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/stopped-upgrade-196124 for IP: 192.168.103.2
	I1227 09:31:29.870464  547714 certs.go:195] generating shared ca certs ...
	I1227 09:31:29.870484  547714 certs.go:227] acquiring lock for ca certs: {Name:mkab52f7e7387ca5a3a8b54d27e85c3698ea95d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:31:29.870655  547714 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22343-373581/.minikube/ca.key
	I1227 09:31:29.870721  547714 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22343-373581/.minikube/proxy-client-ca.key
	I1227 09:31:29.870747  547714 certs.go:257] generating profile certs ...
	I1227 09:31:29.870897  547714 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/stopped-upgrade-196124/client.key
	I1227 09:31:29.870991  547714 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/stopped-upgrade-196124/apiserver.key.446bae5c
	I1227 09:31:29.871068  547714 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/stopped-upgrade-196124/proxy-client.key
	I1227 09:31:29.871225  547714 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/377171.pem (1338 bytes)
	W1227 09:31:29.871270  547714 certs.go:480] ignoring /home/jenkins/minikube-integration/22343-373581/.minikube/certs/377171_empty.pem, impossibly tiny 0 bytes
	I1227 09:31:29.871285  547714 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca-key.pem (1679 bytes)
	I1227 09:31:29.871326  547714 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca.pem (1082 bytes)
	I1227 09:31:29.871368  547714 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/cert.pem (1123 bytes)
	I1227 09:31:29.871396  547714 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/key.pem (1679 bytes)
	I1227 09:31:29.871460  547714 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-373581/.minikube/files/etc/ssl/certs/3771712.pem (1708 bytes)
	I1227 09:31:29.872213  547714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1227 09:31:29.911785  547714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1227 09:31:29.949429  547714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1227 09:31:29.992856  547714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1227 09:31:30.019518  547714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/stopped-upgrade-196124/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1227 09:31:30.044640  547714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/stopped-upgrade-196124/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1227 09:31:30.070690  547714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/stopped-upgrade-196124/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1227 09:31:30.095074  547714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/stopped-upgrade-196124/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1227 09:31:30.121782  547714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1227 09:31:30.146843  547714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/certs/377171.pem --> /usr/share/ca-certificates/377171.pem (1338 bytes)
	I1227 09:31:30.172382  547714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/files/etc/ssl/certs/3771712.pem --> /usr/share/ca-certificates/3771712.pem (1708 bytes)
	I1227 09:31:30.202306  547714 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1227 09:31:30.224219  547714 ssh_runner.go:195] Run: openssl version
	I1227 09:31:30.230079  547714 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3771712.pem
	I1227 09:31:30.240350  547714 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3771712.pem /etc/ssl/certs/3771712.pem
	I1227 09:31:30.249595  547714 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3771712.pem
	I1227 09:31:30.253335  547714 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 27 09:09 /usr/share/ca-certificates/3771712.pem
	I1227 09:31:30.253389  547714 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3771712.pem
	I1227 09:31:30.260638  547714 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1227 09:31:30.269999  547714 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:31:30.279337  547714 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1227 09:31:30.289445  547714 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:31:30.293445  547714 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 09:06 /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:31:30.293518  547714 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:31:30.300780  547714 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1227 09:31:30.311194  547714 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/377171.pem
	I1227 09:31:30.320133  547714 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/377171.pem /etc/ssl/certs/377171.pem
	I1227 09:31:30.330066  547714 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/377171.pem
	I1227 09:31:30.334431  547714 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 27 09:09 /usr/share/ca-certificates/377171.pem
	I1227 09:31:30.334492  547714 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/377171.pem
	I1227 09:31:30.342337  547714 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
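
	Each hash-then-symlink triple above follows OpenSSL's trust-directory convention: "openssl x509 -hash -noout -in CERT" prints the certificate's 8-hex-digit subject hash, and OpenSSL resolves CAs in /etc/ssl/certs through <hash>.0 symlinks (b5213941.0 is that slot for minikubeCA.pem). One link reproduced by hand, as a sketch:

	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"
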
	I1227 09:31:30.352438  547714 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1227 09:31:30.356587  547714 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1227 09:31:30.364084  547714 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1227 09:31:30.371918  547714 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1227 09:31:30.379115  547714 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1227 09:31:30.386057  547714 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1227 09:31:30.392680  547714 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
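
	In the checks above, -checkend 86400 makes openssl exit non-zero if the certificate expires within 86400 seconds (24 hours), so each run is a cheap "still valid tomorrow?" test; a failing status is what would force regeneration. For example (a sketch):

	if sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400; then
	  echo "valid for at least another 24h"
	else
	  echo "expires within 24h"
	fi
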
	I1227 09:31:30.399339  547714 kubeadm.go:401] StartCluster: {Name:stopped-upgrade-196124 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:stopped-upgrade-196124 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 09:31:30.399438  547714 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 09:31:30.399483  547714 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 09:31:30.443237  547714 cri.go:96] found id: ""
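
	The empty "found id" result means no kube-system containers exist yet, so the flow falls through to checking for leftover configuration files. The listing itself is a plain crictl call filtered on the pod-namespace label and can be run by hand:

	sudo crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
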
	I1227 09:31:30.443308  547714 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1227 09:31:30.454733  547714 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1227 09:31:30.454750  547714 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1227 09:31:30.454818  547714 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1227 09:31:30.464360  547714 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1227 09:31:30.464978  547714 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-196124" does not appear in /home/jenkins/minikube-integration/22343-373581/kubeconfig
	I1227 09:31:30.465328  547714 kubeconfig.go:62] /home/jenkins/minikube-integration/22343-373581/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-196124" cluster setting kubeconfig missing "stopped-upgrade-196124" context setting]
	I1227 09:31:30.465856  547714 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-373581/kubeconfig: {Name:mkb411689a0952d3f8325ae3ededdb7352d2980c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:31:30.466924  547714 kapi.go:59] client config for stopped-upgrade-196124: &rest.Config{Host:"https://192.168.103.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22343-373581/.minikube/profiles/stopped-upgrade-196124/client.crt", KeyFile:"/home/jenkins/minikube-integration/22343-373581/.minikube/profiles/stopped-upgrade-196124/client.key", CAFile:"/home/jenkins/minikube-integration/22343-373581/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2780200), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1227 09:31:30.467507  547714 envvar.go:172] "Feature gate default state" feature="InOrderInformersBatchProcess" enabled=true
	I1227 09:31:30.467528  547714 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=true
	I1227 09:31:30.467534  547714 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=true
	I1227 09:31:30.467547  547714 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1227 09:31:30.467557  547714 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1227 09:31:30.467564  547714 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1227 09:31:30.468108  547714 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1227 09:31:30.478566  547714 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-27 09:31:11.515631068 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-27 09:31:29.730242176 +0000
	@@ -41,9 +41,6 @@
	 etcd:
	   local:
	     dataDir: /var/lib/minikube/etcd
	-    extraArgs:
	-      - name: "proxy-refresh-interval"
	-        value: "70000"
	 kubernetesVersion: v1.32.0
	 networking:
	   dnsDomain: cluster.local
	
	-- /stdout --
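
	Drift detection here reduces to diff -u over the old and new rendered configs: exit status 1 (files differ) marks drift and triggers reconfiguration, exit status 0 skips it. In this run the only change is the dropped etcd proxy-refresh-interval extraArg. As a sketch:

	if ! sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new >/dev/null; then
	  sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml  # then re-run the init phases
	fi
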
	I1227 09:31:30.478589  547714 kubeadm.go:1161] stopping kube-system containers ...
	I1227 09:31:30.478607  547714 cri.go:61] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1227 09:31:30.478659  547714 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 09:31:30.531720  547714 cri.go:96] found id: ""
	I1227 09:31:30.531779  547714 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1227 09:31:30.594566  547714 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1227 09:31:30.611238  547714 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5647 Dec 27 09:31 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5657 Dec 27 09:31 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 Dec 27 09:31 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5605 Dec 27 09:31 /etc/kubernetes/scheduler.conf
	
	I1227 09:31:30.611324  547714 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1227 09:31:30.627160  547714 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1227 09:31:30.640600  547714 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1227 09:31:30.656368  547714 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1227 09:31:30.656465  547714 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1227 09:31:30.673481  547714 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1227 09:31:30.688823  547714 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1227 09:31:30.688886  547714 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1227 09:31:30.702325  547714 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1227 09:31:30.712898  547714 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1227 09:31:30.761609  547714 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1227 09:31:31.989063  547714 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.227412387s)
	I1227 09:31:31.989131  547714 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1227 09:31:32.184731  547714 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1227 09:31:32.245014  547714 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
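
	Rather than a full "kubeadm init", the restart path replays individual init phases against the regenerated config; the sequence above, collected (commands as in the log, with K as shorthand for the version-pinned binary):

	K=/var/lib/minikube/binaries/v1.32.0/kubeadm
	sudo $K init phase certs all --config /var/tmp/minikube/kubeadm.yaml
	sudo $K init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml
	sudo $K init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml
	sudo $K init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml
	sudo $K init phase etcd local --config /var/tmp/minikube/kubeadm.yaml
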
	I1227 09:31:32.306499  547714 api_server.go:52] waiting for apiserver process to appear ...
	I1227 09:31:32.306576  547714 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 09:31:32.806661  547714 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
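
	The apiserver wait is a 500 ms poll loop over pgrep, where -f matches against the full command line, -x requires the whole line to match the pattern, and -n returns only the newest matching PID. An equivalent loop, as a sketch:

	until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do sleep 0.5; done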
	
	
	==> CRI-O <==
	Dec 27 09:31:26 pause-174795 crio[2209]: time="2025-12-27T09:31:26.094819165Z" level=info msg="RDT not available in the host system"
	Dec 27 09:31:26 pause-174795 crio[2209]: time="2025-12-27T09:31:26.09483494Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 27 09:31:26 pause-174795 crio[2209]: time="2025-12-27T09:31:26.095599566Z" level=info msg="Conmon does support the --sync option"
	Dec 27 09:31:26 pause-174795 crio[2209]: time="2025-12-27T09:31:26.095616165Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 27 09:31:26 pause-174795 crio[2209]: time="2025-12-27T09:31:26.095635405Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 27 09:31:26 pause-174795 crio[2209]: time="2025-12-27T09:31:26.096316767Z" level=info msg="Conmon does support the --sync option"
	Dec 27 09:31:26 pause-174795 crio[2209]: time="2025-12-27T09:31:26.09633643Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 27 09:31:26 pause-174795 crio[2209]: time="2025-12-27T09:31:26.100522056Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 27 09:31:26 pause-174795 crio[2209]: time="2025-12-27T09:31:26.100545996Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 27 09:31:26 pause-174795 crio[2209]: time="2025-12-27T09:31:26.101140685Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oci/hoo
ks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"systemd\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n    uid_
mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory
= \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    namespaced_auth_dir = \"/etc/crio/auth\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_dir = \
"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [crio.nr
i]\n    enable_nri = true\n    nri_listen = \"/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Dec 27 09:31:26 pause-174795 crio[2209]: time="2025-12-27T09:31:26.101532586Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Dec 27 09:31:26 pause-174795 crio[2209]: time="2025-12-27T09:31:26.101591552Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Dec 27 09:31:26 pause-174795 crio[2209]: time="2025-12-27T09:31:26.19216703Z" level=info msg="Got pod network &{Name:coredns-7d764666f9-89wgq Namespace:kube-system ID:9c3f7db7c0f3e365fffaa6ee445f4c1598f4150727b115ef8aba1d257559a515 UID:d9d69cbd-cc63-4c35-ac3d-ce5778fb32d0 NetNS:/var/run/netns/582b5199-697b-4ad6-ab54-14e022611c44 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00080f190}] Aliases:map[]}"
	Dec 27 09:31:26 pause-174795 crio[2209]: time="2025-12-27T09:31:26.192441484Z" level=info msg="Checking pod kube-system_coredns-7d764666f9-89wgq for CNI network kindnet (type=ptp)"
	Dec 27 09:31:26 pause-174795 crio[2209]: time="2025-12-27T09:31:26.192972031Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 27 09:31:26 pause-174795 crio[2209]: time="2025-12-27T09:31:26.192996556Z" level=info msg="Starting seccomp notifier watcher"
	Dec 27 09:31:26 pause-174795 crio[2209]: time="2025-12-27T09:31:26.193046778Z" level=info msg="Create NRI interface"
	Dec 27 09:31:26 pause-174795 crio[2209]: time="2025-12-27T09:31:26.195130707Z" level=info msg="built-in NRI default validator is disabled"
	Dec 27 09:31:26 pause-174795 crio[2209]: time="2025-12-27T09:31:26.195152228Z" level=info msg="runtime interface created"
	Dec 27 09:31:26 pause-174795 crio[2209]: time="2025-12-27T09:31:26.195166408Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 27 09:31:26 pause-174795 crio[2209]: time="2025-12-27T09:31:26.195173203Z" level=info msg="runtime interface starting up..."
	Dec 27 09:31:26 pause-174795 crio[2209]: time="2025-12-27T09:31:26.195181227Z" level=info msg="starting plugins..."
	Dec 27 09:31:26 pause-174795 crio[2209]: time="2025-12-27T09:31:26.195198508Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 27 09:31:26 pause-174795 crio[2209]: time="2025-12-27T09:31:26.195567215Z" level=info msg="No systemd watchdog enabled"
	Dec 27 09:31:26 pause-174795 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	b975d7336536e       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                     14 seconds ago      Running             coredns                   0                   9c3f7db7c0f3e       coredns-7d764666f9-89wgq               kube-system
	8a782982e9ef2       docker.io/kindest/kindnetd@sha256:7c22558dc06a570d46ea6e8a73b23cdc754eb81f7c08d3441a3171ad359ffc27   25 seconds ago      Running             kindnet-cni               0                   5db6bf7ac8653       kindnet-g5wbg                          kube-system
	b0b4fee0b1c73       32652ff1bbe6b6df16a3dc621fcad4e6e185a2852bffc5847aab79e36c54bfb8                                     27 seconds ago      Running             kube-proxy                0                   095ab0b60d730       kube-proxy-48vr2                       kube-system
	965a27d1118c3       0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2                                     38 seconds ago      Running             etcd                      0                   f525f6128ea30       etcd-pause-174795                      kube-system
	528c2fa63ed83       550794e3b12ac21ec7fd940bdfb45f7c8dae3c52acd8c70e580b882de20c3dcc                                     38 seconds ago      Running             kube-scheduler            0                   49bb9c7c239e8       kube-scheduler-pause-174795            kube-system
	01f37880e510c       5c6acd67e9cd10eb60a246cd233db251ef62ea97e6572f897e873f0cb648f499                                     38 seconds ago      Running             kube-apiserver            0                   ad856476a68e3       kube-apiserver-pause-174795            kube-system
	c13a4a0721d48       2c9a4b058bd7e6ec479c38f9e8a1dac2f5ee5b0a3ebda6dfac968e8720229508                                     38 seconds ago      Running             kube-controller-manager   0                   6d090537f7524       kube-controller-manager-pause-174795   kube-system
	
	
	==> coredns [b975d7336536e5efcea8ea9dc06ed50c574c3e5ed6df99bd11149b10208da309] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:36613 - 24727 "HINFO IN 4646754786077994241.5643036743240327352. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.072466051s
	
	
	==> describe nodes <==
	Name:               pause-174795
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-174795
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6d12708197eb5984f26340287ced0d25367967e8
	                    minikube.k8s.io/name=pause-174795
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_27T09_31_01_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 27 Dec 2025 09:30:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-174795
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 27 Dec 2025 09:31:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 27 Dec 2025 09:31:19 +0000   Sat, 27 Dec 2025 09:30:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 27 Dec 2025 09:31:19 +0000   Sat, 27 Dec 2025 09:30:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 27 Dec 2025 09:31:19 +0000   Sat, 27 Dec 2025 09:30:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 27 Dec 2025 09:31:19 +0000   Sat, 27 Dec 2025 09:31:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    pause-174795
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 6a05ebbd9dbde47388fc365d694bbbfd
	  System UUID:                a7aed780-fa36-45c2-964d-ecdbcb5f10be
	  Boot ID:                    38891013-55ac-4a66-aef6-0b0711ecc60c
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7d764666f9-89wgq                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     28s
	  kube-system                 etcd-pause-174795                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         34s
	  kube-system                 kindnet-g5wbg                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      28s
	  kube-system                 kube-apiserver-pause-174795             250m (3%)     0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-controller-manager-pause-174795    200m (2%)     0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-proxy-48vr2                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-scheduler-pause-174795             100m (1%)     0 (0%)      0 (0%)           0 (0%)         34s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  29s   node-controller  Node pause-174795 event: Registered Node pause-174795 in Controller
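
	For reference, the percentages in the resource tables above are requests/limits divided by the node's allocatable capacity, truncated to whole percent: 850m CPU against 8 allocatable CPUs (8000m) is 850/8000 ≈ 10.6%, shown as 10%; 220Mi of memory (225280Ki) against 32863360Ki is ≈ 0.7%, shown as 0%.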
	
	
	==> dmesg <==
	[  +5.107432] kauditd_printk_skb: 47 callbacks suppressed
	[Dec27 09:02] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 3a ce 76 ea 00 16 08 06
	[ +15.308804] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 5e ed 76 f4 5a 59 08 06
	[  +0.000321] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 3a ce 76 ea 00 16 08 06
	[  +1.257710] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 62 30 8d 6e b6 d3 08 06
	[Dec27 09:03] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff de f8 3b c4 47 35 08 06
	[  +0.002052] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff de 0f d7 5a 4b 84 08 06
	[ +15.582399] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff f6 ce 0c 6c 86 d2 08 06
	[  +0.000307] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 62 30 8d 6e b6 d3 08 06
	[  +9.916919] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 5e cc 0c a0 d8 19 08 06
	[  +0.000360] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 06 01 72 f8 79 43 08 06
	[ +21.695394] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 66 bb be 81 50 ce 08 06
	[  +0.000381] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff de 0f d7 5a 4b 84 08 06
	
	
	==> etcd [965a27d1118c331b5db7062d66e00d6ed567564250c8903fa341758b374f9146] <==
	{"level":"warn","ts":"2025-12-27T09:31:05.603001Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"135.811051ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/legacy-service-account-token-cleaner\" limit:1 ","response":"range_response_count:1 size:238"}
	{"level":"info","ts":"2025-12-27T09:31:05.603049Z","caller":"traceutil/trace.go:172","msg":"trace[1229698832] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/legacy-service-account-token-cleaner; range_end:; response_count:1; response_revision:299; }","duration":"135.870832ms","start":"2025-12-27T09:31:05.467168Z","end":"2025-12-27T09:31:05.603039Z","steps":["trace[1229698832] 'range keys from in-memory index tree'  (duration: 124.152141ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-27T09:31:05.603116Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"124.26675ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15638357589045143119 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/pause-174795.1885089524326247\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/pause-174795.1885089524326247\" value_size:597 lease:6414985552190367093 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2025-12-27T09:31:05.603158Z","caller":"traceutil/trace.go:172","msg":"trace[709271455] transaction","detail":"{read_only:false; response_revision:300; number_of_response:1; }","duration":"136.157883ms","start":"2025-12-27T09:31:05.466989Z","end":"2025-12-27T09:31:05.603147Z","steps":["trace[709271455] 'process raft request'  (duration: 11.823073ms)","trace[709271455] 'compare'  (duration: 123.768142ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-27T09:31:05.640871Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"170.646245ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/root-ca-cert-publisher\" limit:1 ","response":"range_response_count:1 size:209"}
	{"level":"info","ts":"2025-12-27T09:31:05.640924Z","caller":"traceutil/trace.go:172","msg":"trace[700167849] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/root-ca-cert-publisher; range_end:; response_count:1; response_revision:300; }","duration":"170.714591ms","start":"2025-12-27T09:31:05.470199Z","end":"2025-12-27T09:31:05.640914Z","steps":["trace[700167849] 'agreement among raft nodes before linearized reading'  (duration: 170.549861ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-27T09:31:05.640767Z","caller":"traceutil/trace.go:172","msg":"trace[1098471966] linearizableReadLoop","detail":"{readStateIndex:312; appliedIndex:312; }","duration":"162.010273ms","start":"2025-12-27T09:31:05.478730Z","end":"2025-12-27T09:31:05.640741Z","steps":["trace[1098471966] 'read index received'  (duration: 162.001674ms)","trace[1098471966] 'applied index is now lower than readState.Index'  (duration: 7.611µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-27T09:31:05.641324Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"173.1971ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/daemon-set-controller\" limit:1 ","response":"range_response_count:1 size:207"}
	{"level":"info","ts":"2025-12-27T09:31:05.643004Z","caller":"traceutil/trace.go:172","msg":"trace[377871606] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/daemon-set-controller; range_end:; response_count:1; response_revision:301; }","duration":"174.927109ms","start":"2025-12-27T09:31:05.468062Z","end":"2025-12-27T09:31:05.642989Z","steps":["trace[377871606] 'agreement among raft nodes before linearized reading'  (duration: 173.006069ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-27T09:31:05.641413Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"172.935207ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/certificate-controller\" limit:1 ","response":"range_response_count:1 size:209"}
	{"level":"info","ts":"2025-12-27T09:31:05.643145Z","caller":"traceutil/trace.go:172","msg":"trace[550007437] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/certificate-controller; range_end:; response_count:1; response_revision:301; }","duration":"174.673231ms","start":"2025-12-27T09:31:05.468463Z","end":"2025-12-27T09:31:05.643137Z","steps":["trace[550007437] 'agreement among raft nodes before linearized reading'  (duration: 172.72495ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-27T09:31:05.641117Z","caller":"traceutil/trace.go:172","msg":"trace[527544858] transaction","detail":"{read_only:false; response_revision:301; number_of_response:1; }","duration":"170.823124ms","start":"2025-12-27T09:31:05.470260Z","end":"2025-12-27T09:31:05.641084Z","steps":["trace[527544858] 'process raft request'  (duration: 170.489613ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-27T09:31:05.642479Z","caller":"traceutil/trace.go:172","msg":"trace[320369843] transaction","detail":"{read_only:false; response_revision:302; number_of_response:1; }","duration":"154.307152ms","start":"2025-12-27T09:31:05.488161Z","end":"2025-12-27T09:31:05.642468Z","steps":["trace[320369843] 'process raft request'  (duration: 154.237725ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-27T09:31:05.643605Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"124.942181ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/bootstrap-signer\" limit:1 ","response":"range_response_count:1 size:197"}
	{"level":"info","ts":"2025-12-27T09:31:05.643633Z","caller":"traceutil/trace.go:172","msg":"trace[699437485] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/bootstrap-signer; range_end:; response_count:1; response_revision:302; }","duration":"124.985796ms","start":"2025-12-27T09:31:05.518640Z","end":"2025-12-27T09:31:05.643626Z","steps":["trace[699437485] 'agreement among raft nodes before linearized reading'  (duration: 124.89014ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-27T09:31:05.643779Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"156.437834ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-scheduler-pause-174795\" limit:1 ","response":"range_response_count:1 size:4975"}
	{"level":"info","ts":"2025-12-27T09:31:05.643826Z","caller":"traceutil/trace.go:172","msg":"trace[554193686] range","detail":"{range_begin:/registry/pods/kube-system/kube-scheduler-pause-174795; range_end:; response_count:1; response_revision:302; }","duration":"156.489663ms","start":"2025-12-27T09:31:05.487329Z","end":"2025-12-27T09:31:05.643819Z","steps":["trace[554193686] 'agreement among raft nodes before linearized reading'  (duration: 156.380642ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-27T09:31:05.644183Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"157.034871ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-apiserver-pause-174795\" limit:1 ","response":"range_response_count:1 size:7820"}
	{"level":"info","ts":"2025-12-27T09:31:05.644216Z","caller":"traceutil/trace.go:172","msg":"trace[1621207227] range","detail":"{range_begin:/registry/pods/kube-system/kube-apiserver-pause-174795; range_end:; response_count:1; response_revision:302; }","duration":"157.072036ms","start":"2025-12-27T09:31:05.487137Z","end":"2025-12-27T09:31:05.644209Z","steps":["trace[1621207227] 'agreement among raft nodes before linearized reading'  (duration: 157.006758ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-27T09:31:05.644574Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"157.223434ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-pause-174795\" limit:1 ","response":"range_response_count:1 size:5986"}
	{"level":"info","ts":"2025-12-27T09:31:05.644617Z","caller":"traceutil/trace.go:172","msg":"trace[67364716] range","detail":"{range_begin:/registry/pods/kube-system/etcd-pause-174795; range_end:; response_count:1; response_revision:302; }","duration":"157.269679ms","start":"2025-12-27T09:31:05.487339Z","end":"2025-12-27T09:31:05.644609Z","steps":["trace[67364716] 'agreement among raft nodes before linearized reading'  (duration: 156.582492ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-27T09:31:05.644918Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"156.254344ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/csinodes/pause-174795\" limit:1 ","response":"range_response_count:1 size:706"}
	{"level":"info","ts":"2025-12-27T09:31:05.644962Z","caller":"traceutil/trace.go:172","msg":"trace[2043764900] range","detail":"{range_begin:/registry/csinodes/pause-174795; range_end:; response_count:1; response_revision:302; }","duration":"158.043928ms","start":"2025-12-27T09:31:05.486909Z","end":"2025-12-27T09:31:05.644953Z","steps":["trace[2043764900] 'agreement among raft nodes before linearized reading'  (duration: 156.080783ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-27T09:31:05.645296Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"158.267513ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-controller-manager-pause-174795\" limit:1 ","response":"range_response_count:1 size:7493"}
	{"level":"info","ts":"2025-12-27T09:31:05.645338Z","caller":"traceutil/trace.go:172","msg":"trace[1737732446] range","detail":"{range_begin:/registry/pods/kube-system/kube-controller-manager-pause-174795; range_end:; response_count:1; response_revision:302; }","duration":"158.312937ms","start":"2025-12-27T09:31:05.487017Z","end":"2025-12-27T09:31:05.645330Z","steps":["trace[1737732446] 'agreement among raft nodes before linearized reading'  (duration: 157.489036ms)"],"step_count":1}
	
	
	==> kernel <==
	 09:31:34 up  1:13,  0 user,  load average: 5.84, 2.41, 1.81
	Linux pause-174795 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [8a782982e9ef2e0e8def8c7ce58acd1b563b89ecea1122ed0cf5b86a00416acf] <==
	I1227 09:31:09.327052       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1227 09:31:09.327316       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1227 09:31:09.327479       1 main.go:148] setting mtu 1500 for CNI 
	I1227 09:31:09.327507       1 main.go:178] kindnetd IP family: "ipv4"
	I1227 09:31:09.327531       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-27T09:31:09Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1227 09:31:09.530603       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1227 09:31:09.530647       1 controller.go:381] "Waiting for informer caches to sync"
	I1227 09:31:09.530659       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1227 09:31:09.622089       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1227 09:31:10.121973       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1227 09:31:10.122004       1 metrics.go:72] Registering metrics
	I1227 09:31:10.122090       1 controller.go:711] "Syncing nftables rules"
	I1227 09:31:19.530902       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1227 09:31:19.530983       1 main.go:301] handling current node
	I1227 09:31:29.537889       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1227 09:31:29.537934       1 main.go:301] handling current node
	
	
	==> kube-apiserver [01f37880e510cd54c4ee5969744484076b9cec30f33cf150fb6e73162a9f073f] <==
	I1227 09:30:58.432255       1 shared_informer.go:377] "Caches are synced"
	I1227 09:30:58.435913       1 shared_informer.go:377] "Caches are synced"
	I1227 09:30:58.437239       1 controller.go:667] quota admission added evaluator for: namespaces
	I1227 09:30:58.437480       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1227 09:30:58.445946       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1227 09:30:58.446309       1 default_servicecidr_controller.go:231] Setting default ServiceCIDR condition Ready to True
	I1227 09:30:58.449445       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1227 09:30:58.463485       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1227 09:30:59.332142       1 storage_scheduling.go:123] created PriorityClass system-node-critical with value 2000001000
	I1227 09:30:59.335835       1 storage_scheduling.go:123] created PriorityClass system-cluster-critical with value 2000000000
	I1227 09:30:59.335853       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1227 09:30:59.849335       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1227 09:30:59.886062       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1227 09:30:59.952552       1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1227 09:30:59.959780       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1227 09:30:59.960821       1 controller.go:667] quota admission added evaluator for: endpoints
	I1227 09:30:59.966711       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1227 09:31:00.412601       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1227 09:31:00.745648       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1227 09:31:00.765153       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1227 09:31:00.777517       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1227 09:31:06.023604       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1227 09:31:06.078748       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1227 09:31:06.086635       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1227 09:31:06.327238       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [c13a4a0721d48b9d41dfe2ff8c5daaa20149037a95ecde103a52610e6c1d9403] <==
	I1227 09:31:05.482967       1 shared_informer.go:377] "Caches are synced"
	I1227 09:31:05.482986       1 shared_informer.go:377] "Caches are synced"
	I1227 09:31:05.483016       1 shared_informer.go:377] "Caches are synced"
	I1227 09:31:05.483352       1 shared_informer.go:377] "Caches are synced"
	I1227 09:31:05.484033       1 shared_informer.go:377] "Caches are synced"
	I1227 09:31:05.484056       1 shared_informer.go:377] "Caches are synced"
	I1227 09:31:05.484119       1 shared_informer.go:377] "Caches are synced"
	I1227 09:31:05.484167       1 range_allocator.go:177] "Sending events to api server"
	I1227 09:31:05.484194       1 range_allocator.go:181] "Starting range CIDR allocator"
	I1227 09:31:05.484199       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 09:31:05.484202       1 shared_informer.go:377] "Caches are synced"
	I1227 09:31:05.485289       1 shared_informer.go:377] "Caches are synced"
	I1227 09:31:05.485312       1 shared_informer.go:377] "Caches are synced"
	I1227 09:31:05.485344       1 shared_informer.go:377] "Caches are synced"
	I1227 09:31:05.485347       1 shared_informer.go:377] "Caches are synced"
	I1227 09:31:05.485431       1 shared_informer.go:377] "Caches are synced"
	I1227 09:31:05.485438       1 shared_informer.go:377] "Caches are synced"
	I1227 09:31:05.485447       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1227 09:31:05.485452       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1227 09:31:05.485528       1 shared_informer.go:377] "Caches are synced"
	I1227 09:31:05.485649       1 shared_informer.go:377] "Caches are synced"
	I1227 09:31:05.491686       1 shared_informer.go:377] "Caches are synced"
	I1227 09:31:05.511436       1 shared_informer.go:377] "Caches are synced"
	I1227 09:31:05.646920       1 range_allocator.go:433] "Set node PodCIDR" node="pause-174795" podCIDRs=["10.244.0.0/24"]
	I1227 09:31:20.466468       1 node_lifecycle_controller.go:1057] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-proxy [b0b4fee0b1c730e30a9ae0a208fd8f210bae3628512db98fba50b8441d9bd6c2] <==
	I1227 09:31:07.055412       1 server_linux.go:53] "Using iptables proxy"
	I1227 09:31:07.120972       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 09:31:07.221705       1 shared_informer.go:377] "Caches are synced"
	I1227 09:31:07.221741       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1227 09:31:07.221836       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1227 09:31:07.242771       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1227 09:31:07.242826       1 server_linux.go:136] "Using iptables Proxier"
	I1227 09:31:07.247687       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1227 09:31:07.248055       1 server.go:529] "Version info" version="v1.35.0"
	I1227 09:31:07.248080       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 09:31:07.249547       1 config.go:200] "Starting service config controller"
	I1227 09:31:07.249567       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1227 09:31:07.249586       1 config.go:106] "Starting endpoint slice config controller"
	I1227 09:31:07.249591       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1227 09:31:07.249631       1 config.go:403] "Starting serviceCIDR config controller"
	I1227 09:31:07.249641       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1227 09:31:07.249683       1 config.go:309] "Starting node config controller"
	I1227 09:31:07.249700       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1227 09:31:07.249707       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1227 09:31:07.350286       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1227 09:31:07.350290       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1227 09:31:07.350332       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [528c2fa63ed83cec23ae93d5b3d91cb11f694c3f854211e3c5a4b5019745964e] <==
	E1227 09:30:58.452134       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E1227 09:30:58.452262       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E1227 09:30:58.452255       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E1227 09:30:58.452429       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E1227 09:30:58.452446       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E1227 09:30:58.452539       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E1227 09:30:58.452559       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E1227 09:30:58.452562       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E1227 09:30:58.452630       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E1227 09:30:58.452700       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E1227 09:30:58.452717       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E1227 09:30:58.452764       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E1227 09:30:58.452864       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E1227 09:30:58.453052       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E1227 09:30:58.453237       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E1227 09:30:59.363266       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E1227 09:30:59.397894       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E1227 09:30:59.483710       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E1227 09:30:59.489219       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E1227 09:30:59.552716       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E1227 09:30:59.635935       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E1227 09:30:59.637565       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E1227 09:30:59.650363       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E1227 09:30:59.671106       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	I1227 09:31:00.016132       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 27 09:31:06 pause-174795 kubelet[1296]: I1227 09:31:06.115044    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2a2c4505-e238-48ad-b532-6f730cd2d36b-kube-proxy\") pod \"kube-proxy-48vr2\" (UID: \"2a2c4505-e238-48ad-b532-6f730cd2d36b\") " pod="kube-system/kube-proxy-48vr2"
	Dec 27 09:31:06 pause-174795 kubelet[1296]: E1227 09:31:06.225990    1296 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Dec 27 09:31:06 pause-174795 kubelet[1296]: E1227 09:31:06.226035    1296 projected.go:196] Error preparing data for projected volume kube-api-access-vc2tc for pod kube-system/kindnet-g5wbg: configmap "kube-root-ca.crt" not found
	Dec 27 09:31:06 pause-174795 kubelet[1296]: E1227 09:31:06.226120    1296 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1047d45c-b132-440a-a5b7-2ba40ff11397-kube-api-access-vc2tc podName:1047d45c-b132-440a-a5b7-2ba40ff11397 nodeName:}" failed. No retries permitted until 2025-12-27 09:31:06.726090581 +0000 UTC m=+6.042325238 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-vc2tc" (UniqueName: "kubernetes.io/projected/1047d45c-b132-440a-a5b7-2ba40ff11397-kube-api-access-vc2tc") pod "kindnet-g5wbg" (UID: "1047d45c-b132-440a-a5b7-2ba40ff11397") : configmap "kube-root-ca.crt" not found
	Dec 27 09:31:06 pause-174795 kubelet[1296]: E1227 09:31:06.229400    1296 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Dec 27 09:31:06 pause-174795 kubelet[1296]: E1227 09:31:06.229440    1296 projected.go:196] Error preparing data for projected volume kube-api-access-76smf for pod kube-system/kube-proxy-48vr2: configmap "kube-root-ca.crt" not found
	Dec 27 09:31:06 pause-174795 kubelet[1296]: E1227 09:31:06.229517    1296 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2a2c4505-e238-48ad-b532-6f730cd2d36b-kube-api-access-76smf podName:2a2c4505-e238-48ad-b532-6f730cd2d36b nodeName:}" failed. No retries permitted until 2025-12-27 09:31:06.729487626 +0000 UTC m=+6.045722278 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-76smf" (UniqueName: "kubernetes.io/projected/2a2c4505-e238-48ad-b532-6f730cd2d36b-kube-api-access-76smf") pod "kube-proxy-48vr2" (UID: "2a2c4505-e238-48ad-b532-6f730cd2d36b") : configmap "kube-root-ca.crt" not found
	Dec 27 09:31:07 pause-174795 kubelet[1296]: I1227 09:31:07.840600    1296 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-48vr2" podStartSLOduration=1.840580337 podStartE2EDuration="1.840580337s" podCreationTimestamp="2025-12-27 09:31:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 09:31:07.840568912 +0000 UTC m=+7.156803568" watchObservedRunningTime="2025-12-27 09:31:07.840580337 +0000 UTC m=+7.156814995"
	Dec 27 09:31:08 pause-174795 kubelet[1296]: E1227 09:31:08.331880    1296 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-pause-174795" containerName="kube-apiserver"
	Dec 27 09:31:09 pause-174795 kubelet[1296]: I1227 09:31:09.847199    1296 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kindnet-g5wbg" podStartSLOduration=1.680941422 podStartE2EDuration="3.847182816s" podCreationTimestamp="2025-12-27 09:31:06 +0000 UTC" firstStartedPulling="2025-12-27 09:31:06.959278397 +0000 UTC m=+6.275513048" lastFinishedPulling="2025-12-27 09:31:09.125519794 +0000 UTC m=+8.441754442" observedRunningTime="2025-12-27 09:31:09.846947126 +0000 UTC m=+9.163181782" watchObservedRunningTime="2025-12-27 09:31:09.847182816 +0000 UTC m=+9.163417473"
	Dec 27 09:31:11 pause-174795 kubelet[1296]: E1227 09:31:11.987899    1296 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-pause-174795" containerName="etcd"
	Dec 27 09:31:14 pause-174795 kubelet[1296]: E1227 09:31:14.489164    1296 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-pause-174795" containerName="kube-controller-manager"
	Dec 27 09:31:15 pause-174795 kubelet[1296]: E1227 09:31:15.179703    1296 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-pause-174795" containerName="kube-scheduler"
	Dec 27 09:31:18 pause-174795 kubelet[1296]: E1227 09:31:18.336534    1296 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-pause-174795" containerName="kube-apiserver"
	Dec 27 09:31:19 pause-174795 kubelet[1296]: I1227 09:31:19.833308    1296 kubelet_node_status.go:427] "Fast updating node status as it just became ready"
	Dec 27 09:31:19 pause-174795 kubelet[1296]: I1227 09:31:19.916345    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wnpv8\" (UniqueName: \"kubernetes.io/projected/d9d69cbd-cc63-4c35-ac3d-ce5778fb32d0-kube-api-access-wnpv8\") pod \"coredns-7d764666f9-89wgq\" (UID: \"d9d69cbd-cc63-4c35-ac3d-ce5778fb32d0\") " pod="kube-system/coredns-7d764666f9-89wgq"
	Dec 27 09:31:19 pause-174795 kubelet[1296]: I1227 09:31:19.916400    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d9d69cbd-cc63-4c35-ac3d-ce5778fb32d0-config-volume\") pod \"coredns-7d764666f9-89wgq\" (UID: \"d9d69cbd-cc63-4c35-ac3d-ce5778fb32d0\") " pod="kube-system/coredns-7d764666f9-89wgq"
	Dec 27 09:31:20 pause-174795 kubelet[1296]: E1227 09:31:20.858688    1296 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-89wgq" containerName="coredns"
	Dec 27 09:31:20 pause-174795 kubelet[1296]: I1227 09:31:20.868623    1296 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-89wgq" podStartSLOduration=14.86860849 podStartE2EDuration="14.86860849s" podCreationTimestamp="2025-12-27 09:31:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 09:31:20.86847511 +0000 UTC m=+20.184709765" watchObservedRunningTime="2025-12-27 09:31:20.86860849 +0000 UTC m=+20.184843145"
	Dec 27 09:31:21 pause-174795 kubelet[1296]: E1227 09:31:21.861221    1296 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-89wgq" containerName="coredns"
	Dec 27 09:31:22 pause-174795 kubelet[1296]: E1227 09:31:22.863922    1296 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-89wgq" containerName="coredns"
	Dec 27 09:31:29 pause-174795 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 27 09:31:29 pause-174795 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 27 09:31:29 pause-174795 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 27 09:31:29 pause-174795 systemd[1]: kubelet.service: Consumed 1.293s CPU time.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-174795 -n pause-174795
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-174795 -n pause-174795: exit status 2 (352.696786ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context pause-174795 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/Pause (5.74s)
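Note on the status check above: `minikube status` encodes component health in its exit code, so the post-mortem helper records the non-zero exit (status 2, even while stdout still prints "Running") without treating it as fatal ("may be ok"). A minimal Go sketch of that pattern, assuming only the command line shown at helpers_test.go:263; apiServerStatus is an illustrative name, not a helper from the test suite:

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

// apiServerStatus runs the same status command as the helper and returns the
// template output together with the process exit code.
func apiServerStatus(profile string) (string, int, error) {
	cmd := exec.Command("out/minikube-linux-amd64",
		"status", "--format={{.APIServer}}", "-p", profile, "-n", profile)
	out, err := cmd.Output()
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		// A non-zero exit can still carry usable stdout ("Running" above),
		// so report the code instead of failing outright.
		return strings.TrimSpace(string(out)), ee.ExitCode(), nil
	}
	return strings.TrimSpace(string(out)), 0, err
}

func main() {
	status, code, err := apiServerStatus("pause-174795")
	fmt.Printf("apiserver=%q exit=%d err=%v\n", status, code, err)
}

Capturing the exit code separately from stdout is what lets the post-mortem keep collecting logs even when a component is paused or degraded.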

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.2s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-094398 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-094398 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (245.388261ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T09:35:19Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
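The MK_ADDON_ENABLE_PAUSED exit above originates in the paused-state check, which shells out to `sudo runc list -f json` on the node; with this crio setup /run/runc does not exist, so runc exits with status 1 and the addon command aborts with exit status 11. A minimal sketch of that check, grounded only in the stderr shown above (listRuncContainers is an illustrative name, not minikube's internal API):

package main

import (
	"fmt"
	"os/exec"
)

// listRuncContainers mirrors the failing check: list runc containers as JSON.
func listRuncContainers() (string, error) {
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
	if err != nil {
		// e.g. level=error msg="open /run/runc: no such file or directory",
		// exactly as captured in the stderr block above
		return "", fmt.Errorf("runc list -f json: %w: %s", err, out)
	}
	return string(out), nil
}

func main() {
	if _, err := listRuncContainers(); err != nil {
		fmt.Println("check paused failed:", err)
	}
}

On a healthy node the command prints a JSON array of container states; here the missing runc state directory turns a simple listing into a hard failure before the addon is ever enabled.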
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-094398 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-094398 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context old-k8s-version-094398 describe deploy/metrics-server -n kube-system: exit status 1 (56.660947ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-094398 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
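For reference, the expected string in the assertion above is composed from the two addon flags: the --registries override for MetricsServer (fake.domain) is prefixed onto the --images override (registry.k8s.io/echoserver:1.4). A sketch of that composition, mirroring only the flag/expectation pairing visible in this test; overrideImage is illustrative, not minikube's addon code:

package main

import "fmt"

// overrideImage prepends a per-addon registry override to an image reference.
func overrideImage(registry, image string) string {
	if registry == "" {
		return image
	}
	return registry + "/" + image
}

func main() {
	fmt.Println(overrideImage("fake.domain", "registry.k8s.io/echoserver:1.4"))
	// Output: fake.domain/registry.k8s.io/echoserver:1.4
}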
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect old-k8s-version-094398
helpers_test.go:244: (dbg) docker inspect old-k8s-version-094398:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "bfa8d511275e6666c6e19cf4732b3f642e18ad5c197ea460e951c265ec0a9509",
	        "Created": "2025-12-27T09:34:24.619442272Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 596727,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-27T09:34:24.652767846Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c444b5e11df9d6bc496256b0e11f4e11deb33a885211ded2c3a4667df55bcb6b",
	        "ResolvConfPath": "/var/lib/docker/containers/bfa8d511275e6666c6e19cf4732b3f642e18ad5c197ea460e951c265ec0a9509/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/bfa8d511275e6666c6e19cf4732b3f642e18ad5c197ea460e951c265ec0a9509/hostname",
	        "HostsPath": "/var/lib/docker/containers/bfa8d511275e6666c6e19cf4732b3f642e18ad5c197ea460e951c265ec0a9509/hosts",
	        "LogPath": "/var/lib/docker/containers/bfa8d511275e6666c6e19cf4732b3f642e18ad5c197ea460e951c265ec0a9509/bfa8d511275e6666c6e19cf4732b3f642e18ad5c197ea460e951c265ec0a9509-json.log",
	        "Name": "/old-k8s-version-094398",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-094398:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-094398",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "bfa8d511275e6666c6e19cf4732b3f642e18ad5c197ea460e951c265ec0a9509",
	                "LowerDir": "/var/lib/docker/overlay2/e5d0aad007a6182b1053636e6612c4db8810b6db3a9158722f140ef4de1ff740-init/diff:/var/lib/docker/overlay2/8c7e7d4b0a6e752d3b033998593f18412fe880cdae3fca36ce4655d5d5dd6a34/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e5d0aad007a6182b1053636e6612c4db8810b6db3a9158722f140ef4de1ff740/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e5d0aad007a6182b1053636e6612c4db8810b6db3a9158722f140ef4de1ff740/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e5d0aad007a6182b1053636e6612c4db8810b6db3a9158722f140ef4de1ff740/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-094398",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-094398/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-094398",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-094398",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-094398",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "ef48264d941097132be4e9cb7e11eb93589c887f43e2df4b7d4bd1ffdf6fefb4",
	            "SandboxKey": "/var/run/docker/netns/ef48264d9410",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33423"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33424"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33427"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33425"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33426"
	                    }
	                ]
	            },
	            "Networks": {
	                "old-k8s-version-094398": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "0ba531636d5bcd256e5e0c5cc00963300e9aa97dfe2c1fb4eb178390cd3a90b6",
	                    "EndpointID": "6050c6c0118fd64454c75e6e99e331426c2de7a64e10cac3b7873f7a6180aa62",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "56:50:1c:90:96:e3",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-094398",
	                        "bfa8d511275e"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-094398 -n old-k8s-version-094398
helpers_test.go:253: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-094398 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-094398 logs -n 25: (1.039417999s)
helpers_test.go:261: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p force-systemd-flag-868742                                                                                                                                                                                                                  │ force-systemd-flag-868742 │ jenkins │ v1.37.0 │ 27 Dec 25 09:32 UTC │ 27 Dec 25 09:32 UTC │
	│ start   │ -p missing-upgrade-949641 --memory=3072 --driver=docker  --container-runtime=crio                                                                                                                                                             │ missing-upgrade-949641    │ jenkins │ v1.35.0 │ 27 Dec 25 09:32 UTC │ 27 Dec 25 09:32 UTC │
	│ ssh     │ cert-options-318270 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-318270       │ jenkins │ v1.37.0 │ 27 Dec 25 09:32 UTC │ 27 Dec 25 09:32 UTC │
	│ ssh     │ -p cert-options-318270 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-318270       │ jenkins │ v1.37.0 │ 27 Dec 25 09:32 UTC │ 27 Dec 25 09:32 UTC │
	│ delete  │ -p cert-options-318270                                                                                                                                                                                                                        │ cert-options-318270       │ jenkins │ v1.37.0 │ 27 Dec 25 09:32 UTC │ 27 Dec 25 09:32 UTC │
	│ start   │ -p kubernetes-upgrade-761172 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-761172 │ jenkins │ v1.37.0 │ 27 Dec 25 09:32 UTC │ 27 Dec 25 09:32 UTC │
	│ start   │ -p missing-upgrade-949641 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                                                      │ missing-upgrade-949641    │ jenkins │ v1.37.0 │ 27 Dec 25 09:32 UTC │ 27 Dec 25 09:33 UTC │
	│ stop    │ -p kubernetes-upgrade-761172 --alsologtostderr                                                                                                                                                                                                │ kubernetes-upgrade-761172 │ jenkins │ v1.37.0 │ 27 Dec 25 09:32 UTC │ 27 Dec 25 09:32 UTC │
	│ start   │ -p kubernetes-upgrade-761172 --memory=3072 --kubernetes-version=v1.35.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-761172 │ jenkins │ v1.37.0 │ 27 Dec 25 09:32 UTC │ 27 Dec 25 09:33 UTC │
	│ delete  │ -p missing-upgrade-949641                                                                                                                                                                                                                     │ missing-upgrade-949641    │ jenkins │ v1.37.0 │ 27 Dec 25 09:33 UTC │ 27 Dec 25 09:33 UTC │
	│ start   │ -p running-upgrade-561421 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                                                                                                                          │ running-upgrade-561421    │ jenkins │ v1.35.0 │ 27 Dec 25 09:33 UTC │ 27 Dec 25 09:33 UTC │
	│ start   │ -p kubernetes-upgrade-761172 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                             │ kubernetes-upgrade-761172 │ jenkins │ v1.37.0 │ 27 Dec 25 09:33 UTC │                     │
	│ start   │ -p kubernetes-upgrade-761172 --memory=3072 --kubernetes-version=v1.35.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-761172 │ jenkins │ v1.37.0 │ 27 Dec 25 09:33 UTC │ 27 Dec 25 09:33 UTC │
	│ delete  │ -p kubernetes-upgrade-761172                                                                                                                                                                                                                  │ kubernetes-upgrade-761172 │ jenkins │ v1.37.0 │ 27 Dec 25 09:33 UTC │ 27 Dec 25 09:33 UTC │
	│ start   │ -p test-preload-805186 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio                                                                                                                  │ test-preload-805186       │ jenkins │ v1.37.0 │ 27 Dec 25 09:33 UTC │ 27 Dec 25 09:34 UTC │
	│ start   │ -p running-upgrade-561421 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                                                      │ running-upgrade-561421    │ jenkins │ v1.37.0 │ 27 Dec 25 09:33 UTC │ 27 Dec 25 09:34 UTC │
	│ delete  │ -p running-upgrade-561421                                                                                                                                                                                                                     │ running-upgrade-561421    │ jenkins │ v1.37.0 │ 27 Dec 25 09:34 UTC │ 27 Dec 25 09:34 UTC │
	│ start   │ -p old-k8s-version-094398 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-094398    │ jenkins │ v1.37.0 │ 27 Dec 25 09:34 UTC │ 27 Dec 25 09:35 UTC │
	│ image   │ test-preload-805186 image pull ghcr.io/medyagh/image-mirrors/busybox:latest                                                                                                                                                                   │ test-preload-805186       │ jenkins │ v1.37.0 │ 27 Dec 25 09:34 UTC │ 27 Dec 25 09:34 UTC │
	│ stop    │ -p test-preload-805186                                                                                                                                                                                                                        │ test-preload-805186       │ jenkins │ v1.37.0 │ 27 Dec 25 09:34 UTC │ 27 Dec 25 09:34 UTC │
	│ start   │ -p test-preload-805186 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio                                                                                                                            │ test-preload-805186       │ jenkins │ v1.37.0 │ 27 Dec 25 09:34 UTC │                     │
	│ start   │ -p cert-expiration-237269 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-237269    │ jenkins │ v1.37.0 │ 27 Dec 25 09:35 UTC │ 27 Dec 25 09:35 UTC │
	│ delete  │ -p cert-expiration-237269                                                                                                                                                                                                                     │ cert-expiration-237269    │ jenkins │ v1.37.0 │ 27 Dec 25 09:35 UTC │                     │
	│ start   │ -p embed-certs-912564 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                        │ embed-certs-912564        │ jenkins │ v1.37.0 │ 27 Dec 25 09:35 UTC │                     │
	│ addons  │ enable metrics-server -p old-k8s-version-094398 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-094398    │ jenkins │ v1.37.0 │ 27 Dec 25 09:35 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/27 09:35:09
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1227 09:35:09.298018  605150 out.go:360] Setting OutFile to fd 1 ...
	I1227 09:35:09.298126  605150 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:35:09.298136  605150 out.go:374] Setting ErrFile to fd 2...
	I1227 09:35:09.298140  605150 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:35:09.298382  605150 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22343-373581/.minikube/bin
	I1227 09:35:09.298961  605150 out.go:368] Setting JSON to false
	I1227 09:35:09.300140  605150 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4653,"bootTime":1766823456,"procs":333,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1227 09:35:09.300200  605150 start.go:143] virtualization: kvm guest
	I1227 09:35:09.301888  605150 out.go:179] * [embed-certs-912564] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1227 09:35:09.303595  605150 out.go:179]   - MINIKUBE_LOCATION=22343
	I1227 09:35:09.303606  605150 notify.go:221] Checking for updates...
	I1227 09:35:09.305558  605150 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 09:35:09.306579  605150 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22343-373581/kubeconfig
	I1227 09:35:09.307583  605150 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22343-373581/.minikube
	I1227 09:35:09.309073  605150 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1227 09:35:09.310124  605150 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1227 09:35:09.311612  605150 config.go:182] Loaded profile config "old-k8s-version-094398": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1227 09:35:09.311731  605150 config.go:182] Loaded profile config "stopped-upgrade-196124": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1227 09:35:09.311863  605150 config.go:182] Loaded profile config "test-preload-805186": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:35:09.311976  605150 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 09:35:09.336269  605150 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1227 09:35:09.336445  605150 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 09:35:09.402150  605150 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-27 09:35:09.390715118 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1227 09:35:09.402308  605150 docker.go:319] overlay module found
	I1227 09:35:09.403509  605150 out.go:179] * Using the docker driver based on user configuration
	I1227 09:35:09.404424  605150 start.go:309] selected driver: docker
	I1227 09:35:09.404440  605150 start.go:928] validating driver "docker" against <nil>
	I1227 09:35:09.404455  605150 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 09:35:09.405123  605150 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 09:35:09.462874  605150 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-27 09:35:09.453848689 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1227 09:35:09.463076  605150 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1227 09:35:09.463381  605150 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 09:35:09.464752  605150 out.go:179] * Using Docker driver with root privileges
	I1227 09:35:09.465701  605150 cni.go:84] Creating CNI manager for ""
	I1227 09:35:09.465777  605150 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 09:35:09.465812  605150 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I1227 09:35:09.465882  605150 start.go:353] cluster config:
	{Name:embed-certs-912564 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:embed-certs-912564 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 09:35:09.467052  605150 out.go:179] * Starting "embed-certs-912564" primary control-plane node in "embed-certs-912564" cluster
	I1227 09:35:09.467953  605150 cache.go:134] Beginning downloading kic base image for docker with crio
	I1227 09:35:09.468937  605150 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
	I1227 09:35:09.469892  605150 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 09:35:09.469923  605150 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22343-373581/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4
	I1227 09:35:09.469935  605150 cache.go:65] Caching tarball of preloaded images
	I1227 09:35:09.469971  605150 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1227 09:35:09.470033  605150 preload.go:251] Found /home/jenkins/minikube-integration/22343-373581/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1227 09:35:09.470049  605150 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I1227 09:35:09.470173  605150 profile.go:143] Saving config to /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/embed-certs-912564/config.json ...
	I1227 09:35:09.470203  605150 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/embed-certs-912564/config.json: {Name:mk07ec619d10afd1705ccc02e9d34055555f8458 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
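
The lock.go/profile.go pair above saves the profile's config.json under a named file lock (Delay:500ms, Timeout:1m0s). A minimal sketch of the marshal-and-write step, assuming a plain os.WriteFile stands in for the lock and a two-field struct stands in for the full ClusterConfig:

    package main

    import (
    	"encoding/json"
    	"log"
    	"os"
    )

    // ClusterConfig is a stand-in for the profile config dumped above;
    // only a couple of illustrative fields are shown.
    type ClusterConfig struct {
    	Name       string `json:"Name"`
    	EmbedCerts bool   `json:"EmbedCerts"`
    	Driver     string `json:"Driver"`
    }

    func main() {
    	cfg := ClusterConfig{Name: "embed-certs-912564", EmbedCerts: true, Driver: "docker"}
    	data, err := json.MarshalIndent(cfg, "", "  ")
    	if err != nil {
    		log.Fatal(err)
    	}
    	// The real flow acquires a named file lock before this write;
    	// here the write is unguarded for brevity.
    	if err := os.WriteFile("config.json", data, 0o644); err != nil {
    		log.Fatal(err)
    	}
    }
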
	I1227 09:35:09.489873  605150 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon, skipping pull
	I1227 09:35:09.489892  605150 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in daemon, skipping load
	I1227 09:35:09.489909  605150 cache.go:243] Successfully downloaded all kic artifacts
	I1227 09:35:09.489949  605150 start.go:360] acquireMachinesLock for embed-certs-912564: {Name:mk61b0f1dd44336f66b7ae60f44b102943279f72 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 09:35:09.490055  605150 start.go:364] duration metric: took 77.144µs to acquireMachinesLock for "embed-certs-912564"
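
acquireMachinesLock above carries its own retry settings (Delay:500ms, Timeout:10m0s) and then logs how long the acquire took. A minimal sketch of that try-sleep-deadline loop, assuming an O_EXCL lock file as a stand-in for the real lock implementation:

    package main

    import (
    	"fmt"
    	"os"
    	"time"
    )

    // tryAcquire stands in for the real lock: an O_EXCL create succeeds
    // only while no other process holds the lock file.
    func tryAcquire(path string) bool {
    	f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
    	if err != nil {
    		return false
    	}
    	f.Close()
    	return true
    }

    func main() {
    	const delay, timeout = 500 * time.Millisecond, 10 * time.Minute
    	start := time.Now()
    	for !tryAcquire("/tmp/minikube-machines.lock") {
    		if time.Since(start) > timeout {
    			fmt.Println("timed out acquiring machines lock")
    			return
    		}
    		time.Sleep(delay)
    	}
    	fmt.Printf("took %s to acquire machines lock\n", time.Since(start))
    }
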
	I1227 09:35:09.490078  605150 start.go:93] Provisioning new machine with config: &{Name:embed-certs-912564 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:embed-certs-912564 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1227 09:35:09.490174  605150 start.go:125] createHost starting for "" (driver="docker")
	W1227 09:35:08.138868  547714 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 09:35:08.138887  547714 logs.go:123] Gathering logs for kube-apiserver [d44e2ceac7e3f8f8099038cf6de8f1afba5fbe79f9878ba08988175a51183420] ...
	I1227 09:35:08.138899  547714 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d44e2ceac7e3f8f8099038cf6de8f1afba5fbe79f9878ba08988175a51183420"
	I1227 09:35:08.176922  547714 logs.go:123] Gathering logs for kube-scheduler [5225acb4ada000e08a4b13ab1d7fe7f2369d884adf3bca584ef436bb8d74caed] ...
	I1227 09:35:08.176949  547714 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5225acb4ada000e08a4b13ab1d7fe7f2369d884adf3bca584ef436bb8d74caed"
	I1227 09:35:08.255423  547714 logs.go:123] Gathering logs for kube-controller-manager [5a7c7b503834980891eb2dfade239cc076b6232512e745f64729a2d2a81b6dcf] ...
	I1227 09:35:08.255455  547714 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a7c7b503834980891eb2dfade239cc076b6232512e745f64729a2d2a81b6dcf"
	I1227 09:35:10.794866  547714 api_server.go:299] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1227 09:35:10.795334  547714 api_server.go:315] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1227 09:35:10.795396  547714 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 09:35:10.795465  547714 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 09:35:10.833599  547714 cri.go:96] found id: "d44e2ceac7e3f8f8099038cf6de8f1afba5fbe79f9878ba08988175a51183420"
	I1227 09:35:10.833623  547714 cri.go:96] found id: ""
	I1227 09:35:10.833633  547714 logs.go:282] 1 containers: [d44e2ceac7e3f8f8099038cf6de8f1afba5fbe79f9878ba08988175a51183420]
	I1227 09:35:10.833698  547714 ssh_runner.go:195] Run: which crictl
	I1227 09:35:10.837572  547714 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 09:35:10.837634  547714 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 09:35:10.872291  547714 cri.go:96] found id: ""
	I1227 09:35:10.872321  547714 logs.go:282] 0 containers: []
	W1227 09:35:10.872332  547714 logs.go:284] No container was found matching "etcd"
	I1227 09:35:10.872341  547714 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 09:35:10.872404  547714 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 09:35:10.908050  547714 cri.go:96] found id: ""
	I1227 09:35:10.908076  547714 logs.go:282] 0 containers: []
	W1227 09:35:10.908087  547714 logs.go:284] No container was found matching "coredns"
	I1227 09:35:10.908096  547714 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 09:35:10.908153  547714 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 09:35:10.942697  547714 cri.go:96] found id: "5225acb4ada000e08a4b13ab1d7fe7f2369d884adf3bca584ef436bb8d74caed"
	I1227 09:35:10.942718  547714 cri.go:96] found id: ""
	I1227 09:35:10.942725  547714 logs.go:282] 1 containers: [5225acb4ada000e08a4b13ab1d7fe7f2369d884adf3bca584ef436bb8d74caed]
	I1227 09:35:10.942781  547714 ssh_runner.go:195] Run: which crictl
	I1227 09:35:10.946684  547714 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 09:35:10.946761  547714 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 09:35:10.987680  547714 cri.go:96] found id: ""
	I1227 09:35:10.987705  547714 logs.go:282] 0 containers: []
	W1227 09:35:10.987717  547714 logs.go:284] No container was found matching "kube-proxy"
	I1227 09:35:10.987725  547714 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 09:35:10.987782  547714 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 09:35:11.022773  547714 cri.go:96] found id: "5a7c7b503834980891eb2dfade239cc076b6232512e745f64729a2d2a81b6dcf"
	I1227 09:35:11.022809  547714 cri.go:96] found id: ""
	I1227 09:35:11.022821  547714 logs.go:282] 1 containers: [5a7c7b503834980891eb2dfade239cc076b6232512e745f64729a2d2a81b6dcf]
	I1227 09:35:11.022873  547714 ssh_runner.go:195] Run: which crictl
	I1227 09:35:11.026858  547714 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 09:35:11.026927  547714 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 09:35:11.064682  547714 cri.go:96] found id: ""
	I1227 09:35:11.064708  547714 logs.go:282] 0 containers: []
	W1227 09:35:11.064718  547714 logs.go:284] No container was found matching "kindnet"
	I1227 09:35:11.064726  547714 cri.go:61] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1227 09:35:11.064784  547714 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1227 09:35:11.101961  547714 cri.go:96] found id: ""
	I1227 09:35:11.101991  547714 logs.go:282] 0 containers: []
	W1227 09:35:11.102002  547714 logs.go:284] No container was found matching "storage-provisioner"
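
Each cri.go:61/96 pair in this loop is a single crictl query: list all container IDs matching a name filter, where an empty result becomes the "No container was found" warning. A minimal sketch of that query, assuming crictl is on PATH and sudo is available non-interactively:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func listContainers(name string) ([]string, error) {
    	out, err := exec.Command("sudo", "crictl", "--timeout=10s",
    		"ps", "-a", "--quiet", "--name="+name).Output()
    	if err != nil {
    		return nil, err
    	}
    	// crictl --quiet prints one container ID per line; Fields drops blanks.
    	return strings.Fields(string(out)), nil
    }

    func main() {
    	for _, name := range []string{"kube-apiserver", "etcd", "coredns"} {
    		ids, err := listContainers(name)
    		if err != nil {
    			fmt.Println("crictl failed:", err)
    			continue
    		}
    		fmt.Printf("%d containers: %v\n", len(ids), ids)
    	}
    }
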
	I1227 09:35:11.102014  547714 logs.go:123] Gathering logs for CRI-O ...
	I1227 09:35:11.102030  547714 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 09:35:11.151526  547714 logs.go:123] Gathering logs for container status ...
	I1227 09:35:11.151557  547714 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 09:35:11.190970  547714 logs.go:123] Gathering logs for kubelet ...
	I1227 09:35:11.191001  547714 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 09:35:11.297773  547714 logs.go:123] Gathering logs for dmesg ...
	I1227 09:35:11.297825  547714 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 09:35:11.315339  547714 logs.go:123] Gathering logs for describe nodes ...
	I1227 09:35:11.315370  547714 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 09:35:11.385230  547714 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 09:35:11.385252  547714 logs.go:123] Gathering logs for kube-apiserver [d44e2ceac7e3f8f8099038cf6de8f1afba5fbe79f9878ba08988175a51183420] ...
	I1227 09:35:11.385265  547714 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d44e2ceac7e3f8f8099038cf6de8f1afba5fbe79f9878ba08988175a51183420"
	I1227 09:35:11.424895  547714 logs.go:123] Gathering logs for kube-scheduler [5225acb4ada000e08a4b13ab1d7fe7f2369d884adf3bca584ef436bb8d74caed] ...
	I1227 09:35:11.424929  547714 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5225acb4ada000e08a4b13ab1d7fe7f2369d884adf3bca584ef436bb8d74caed"
	I1227 09:35:11.524277  547714 logs.go:123] Gathering logs for kube-controller-manager [5a7c7b503834980891eb2dfade239cc076b6232512e745f64729a2d2a81b6dcf] ...
	I1227 09:35:11.524310  547714 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a7c7b503834980891eb2dfade239cc076b6232512e745f64729a2d2a81b6dcf"
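
The api_server.go:299/315 pairs above are a healthz probe against the apiserver that keeps landing on "connection refused" because nothing is listening on 8443 yet. A minimal sketch of such a probe, assuming the self-signed serving certificate is skipped rather than verified against the cluster CA:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout: 2 * time.Second,
    		Transport: &http.Transport{
    			// The apiserver cert is not in the host trust store.
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	url := "https://192.168.103.2:8443/healthz"
    	for i := 0; i < 5; i++ {
    		resp, err := client.Get(url)
    		if err != nil {
    			fmt.Println("stopped:", err) // e.g. connect: connection refused
    			time.Sleep(2 * time.Second)
    			continue
    		}
    		resp.Body.Close()
    		fmt.Println("healthz status:", resp.StatusCode)
    		return
    	}
    }
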
	W1227 09:35:08.531734  599940 pod_ready.go:104] pod "coredns-7d764666f9-8s7c7" is not "Ready", error: <nil>
	W1227 09:35:10.532421  599940 pod_ready.go:104] pod "coredns-7d764666f9-8s7c7" is not "Ready", error: <nil>
	W1227 09:35:13.031165  599940 pod_ready.go:104] pod "coredns-7d764666f9-8s7c7" is not "Ready", error: <nil>
	I1227 09:35:09.491814  605150 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1227 09:35:09.492004  605150 start.go:159] libmachine.API.Create for "embed-certs-912564" (driver="docker")
	I1227 09:35:09.492032  605150 client.go:173] LocalClient.Create starting
	I1227 09:35:09.492081  605150 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca.pem
	I1227 09:35:09.492120  605150 main.go:144] libmachine: Decoding PEM data...
	I1227 09:35:09.492136  605150 main.go:144] libmachine: Parsing certificate...
	I1227 09:35:09.492190  605150 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22343-373581/.minikube/certs/cert.pem
	I1227 09:35:09.492208  605150 main.go:144] libmachine: Decoding PEM data...
	I1227 09:35:09.492218  605150 main.go:144] libmachine: Parsing certificate...
	I1227 09:35:09.492552  605150 cli_runner.go:164] Run: docker network inspect embed-certs-912564 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1227 09:35:09.507818  605150 cli_runner.go:211] docker network inspect embed-certs-912564 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1227 09:35:09.507871  605150 network_create.go:284] running [docker network inspect embed-certs-912564] to gather additional debugging logs...
	I1227 09:35:09.507885  605150 cli_runner.go:164] Run: docker network inspect embed-certs-912564
	W1227 09:35:09.523669  605150 cli_runner.go:211] docker network inspect embed-certs-912564 returned with exit code 1
	I1227 09:35:09.523695  605150 network_create.go:287] error running [docker network inspect embed-certs-912564]: docker network inspect embed-certs-912564: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-912564 not found
	I1227 09:35:09.523705  605150 network_create.go:289] output of [docker network inspect embed-certs-912564]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-912564 not found
	
	** /stderr **
	I1227 09:35:09.523772  605150 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 09:35:09.541894  605150 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-8c57ecff6d5e IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:1e:54:d8:ad:bd:73} reservation:<nil>}
	I1227 09:35:09.542843  605150 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-21a699476be6 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:fe:e8:d9:95:e6:36} reservation:<nil>}
	I1227 09:35:09.543378  605150 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-8e97c5356905 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:f2:d9:6b:42:f5:e3} reservation:<nil>}
	I1227 09:35:09.544150  605150 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-0ba531636d5b IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:6a:e4:d5:d9:cd:a6} reservation:<nil>}
	I1227 09:35:09.544898  605150 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-790568d92826 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:12:be:81:66:94:ec} reservation:<nil>}
	I1227 09:35:09.546013  605150 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f0b290}
	I1227 09:35:09.546041  605150 network_create.go:124] attempt to create docker network embed-certs-912564 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1227 09:35:09.546084  605150 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-912564 embed-certs-912564
	I1227 09:35:09.592826  605150 network_create.go:108] docker network embed-certs-912564 192.168.94.0/24 created
	I1227 09:35:09.592854  605150 kic.go:121] calculated static IP "192.168.94.2" for the "embed-certs-912564" container
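
The network.go lines above step through candidate private /24s by adding 9 to the third octet (49, 58, 67, 76, 85 are taken; 94 is free) before the bridge network is created. A minimal sketch of that stepping, assuming the set of taken subnets is already known rather than read from the host's bridge interfaces:

    package main

    import "fmt"

    func main() {
    	taken := map[int]bool{49: true, 58: true, 67: true, 76: true, 85: true}
    	// Step the third octet by 9, mirroring the 192.168.49/58/67/... sequence above.
    	for octet := 49; octet <= 255; octet += 9 {
    		subnet := fmt.Sprintf("192.168.%d.0/24", octet)
    		if taken[octet] {
    			fmt.Println("skipping subnet", subnet, "that is taken")
    			continue
    		}
    		fmt.Println("using free private subnet", subnet)
    		break
    	}
    }
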
	I1227 09:35:09.592935  605150 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1227 09:35:09.610172  605150 cli_runner.go:164] Run: docker volume create embed-certs-912564 --label name.minikube.sigs.k8s.io=embed-certs-912564 --label created_by.minikube.sigs.k8s.io=true
	I1227 09:35:09.626925  605150 oci.go:103] Successfully created a docker volume embed-certs-912564
	I1227 09:35:09.626996  605150 cli_runner.go:164] Run: docker run --rm --name embed-certs-912564-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-912564 --entrypoint /usr/bin/test -v embed-certs-912564:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -d /var/lib
	I1227 09:35:09.992474  605150 oci.go:107] Successfully prepared a docker volume embed-certs-912564
	I1227 09:35:09.992561  605150 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 09:35:09.992574  605150 kic.go:194] Starting extracting preloaded images to volume ...
	I1227 09:35:09.992629  605150 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22343-373581/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-912564:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -I lz4 -xf /preloaded.tar -C /extractDir
	I1227 09:35:13.838837  605150 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22343-373581/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-912564:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -I lz4 -xf /preloaded.tar -C /extractDir: (3.846133569s)
	I1227 09:35:13.838873  605150 kic.go:203] duration metric: took 3.84629365s to extract preloaded images to volume ...
	W1227 09:35:13.838968  605150 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1227 09:35:13.839018  605150 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1227 09:35:13.839069  605150 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1227 09:35:13.892215  605150 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-912564 --name embed-certs-912564 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-912564 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-912564 --network embed-certs-912564 --ip 192.168.94.2 --volume embed-certs-912564:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a
	I1227 09:35:14.171487  605150 cli_runner.go:164] Run: docker container inspect embed-certs-912564 --format={{.State.Running}}
	I1227 09:35:14.190918  605150 cli_runner.go:164] Run: docker container inspect embed-certs-912564 --format={{.State.Status}}
	I1227 09:35:14.208726  605150 cli_runner.go:164] Run: docker exec embed-certs-912564 stat /var/lib/dpkg/alternatives/iptables
	I1227 09:35:14.254440  605150 oci.go:144] the created container "embed-certs-912564" has a running status.
	I1227 09:35:14.254473  605150 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22343-373581/.minikube/machines/embed-certs-912564/id_rsa...
	I1227 09:35:14.366081  605150 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22343-373581/.minikube/machines/embed-certs-912564/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1227 09:35:14.401449  605150 cli_runner.go:164] Run: docker container inspect embed-certs-912564 --format={{.State.Status}}
	I1227 09:35:14.425182  605150 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1227 09:35:14.425206  605150 kic_runner.go:114] Args: [docker exec --privileged embed-certs-912564 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1227 09:35:14.480280  605150 cli_runner.go:164] Run: docker container inspect embed-certs-912564 --format={{.State.Status}}
	I1227 09:35:14.504152  605150 machine.go:94] provisionDockerMachine start ...
	I1227 09:35:14.504264  605150 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-912564
	I1227 09:35:14.525354  605150 main.go:144] libmachine: Using SSH client type: native
	I1227 09:35:14.525747  605150 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33433 <nil> <nil>}
	I1227 09:35:14.525847  605150 main.go:144] libmachine: About to run SSH command:
	hostname
	I1227 09:35:14.653917  605150 main.go:144] libmachine: SSH cmd err, output: <nil>: embed-certs-912564
	
	I1227 09:35:14.653946  605150 ubuntu.go:182] provisioning hostname "embed-certs-912564"
	I1227 09:35:14.654028  605150 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-912564
	I1227 09:35:14.676012  605150 main.go:144] libmachine: Using SSH client type: native
	I1227 09:35:14.676310  605150 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33433 <nil> <nil>}
	I1227 09:35:14.676332  605150 main.go:144] libmachine: About to run SSH command:
	sudo hostname embed-certs-912564 && echo "embed-certs-912564" | sudo tee /etc/hostname
	I1227 09:35:14.813165  605150 main.go:144] libmachine: SSH cmd err, output: <nil>: embed-certs-912564
	
	I1227 09:35:14.813243  605150 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-912564
	I1227 09:35:14.833894  605150 main.go:144] libmachine: Using SSH client type: native
	I1227 09:35:14.834129  605150 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33433 <nil> <nil>}
	I1227 09:35:14.834159  605150 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-912564' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-912564/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-912564' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1227 09:35:14.955874  605150 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1227 09:35:14.955907  605150 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22343-373581/.minikube CaCertPath:/home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22343-373581/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22343-373581/.minikube}
	I1227 09:35:14.955942  605150 ubuntu.go:190] setting up certificates
	I1227 09:35:14.955962  605150 provision.go:84] configureAuth start
	I1227 09:35:14.956017  605150 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-912564
	I1227 09:35:14.973608  605150 provision.go:143] copyHostCerts
	I1227 09:35:14.973666  605150 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-373581/.minikube/ca.pem, removing ...
	I1227 09:35:14.973677  605150 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-373581/.minikube/ca.pem
	I1227 09:35:14.973743  605150 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22343-373581/.minikube/ca.pem (1082 bytes)
	I1227 09:35:14.973874  605150 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-373581/.minikube/cert.pem, removing ...
	I1227 09:35:14.973886  605150 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-373581/.minikube/cert.pem
	I1227 09:35:14.973919  605150 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22343-373581/.minikube/cert.pem (1123 bytes)
	I1227 09:35:14.973996  605150 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-373581/.minikube/key.pem, removing ...
	I1227 09:35:14.974007  605150 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-373581/.minikube/key.pem
	I1227 09:35:14.974036  605150 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22343-373581/.minikube/key.pem (1679 bytes)
	I1227 09:35:14.974090  605150 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22343-373581/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca-key.pem org=jenkins.embed-certs-912564 san=[127.0.0.1 192.168.94.2 embed-certs-912564 localhost minikube]
	I1227 09:35:15.083109  605150 provision.go:177] copyRemoteCerts
	I1227 09:35:15.083177  605150 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1227 09:35:15.083215  605150 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-912564
	I1227 09:35:15.100738  605150 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/embed-certs-912564/id_rsa Username:docker}
	I1227 09:35:15.190686  605150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1227 09:35:15.209271  605150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1227 09:35:15.226284  605150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1227 09:35:15.242656  605150 provision.go:87] duration metric: took 286.665676ms to configureAuth
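
configureAuth above issues a server certificate whose SANs cover 127.0.0.1, 192.168.94.2, the hostname, localhost, and minikube, signed by the profile CA. A minimal sketch of issuing such a cert with crypto/x509; the CA here is generated inline for the example, whereas the real flow loads ca.pem/ca-key.pem from disk:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"log"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// Illustrative CA key and self-signed CA cert.
    	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().Add(26280 * time.Hour), // matches CertExpiration above
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign,
    		BasicConstraintsValid: true,
    	}
    	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	if err != nil {
    		log.Fatal(err)
    	}
    	caCert, _ := x509.ParseCertificate(caDER)

    	// Server cert with the SAN list seen in the provision.go line above.
    	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	srvTmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-912564"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		DNSNames:     []string{"embed-certs-912564", "localhost", "minikube"},
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.94.2")},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
    	if err != nil {
    		log.Fatal(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
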
	I1227 09:35:15.242683  605150 ubuntu.go:206] setting minikube options for container-runtime
	I1227 09:35:15.242861  605150 config.go:182] Loaded profile config "embed-certs-912564": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:35:15.242978  605150 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-912564
	I1227 09:35:15.261351  605150 main.go:144] libmachine: Using SSH client type: native
	I1227 09:35:15.261572  605150 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33433 <nil> <nil>}
	I1227 09:35:15.261591  605150 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1227 09:35:15.519009  605150 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1227 09:35:15.519038  605150 machine.go:97] duration metric: took 1.014861596s to provisionDockerMachine
	I1227 09:35:15.519050  605150 client.go:176] duration metric: took 6.027010074s to LocalClient.Create
	I1227 09:35:15.519072  605150 start.go:167] duration metric: took 6.027068986s to libmachine.API.Create "embed-certs-912564"
	I1227 09:35:15.519084  605150 start.go:293] postStartSetup for "embed-certs-912564" (driver="docker")
	I1227 09:35:15.519097  605150 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1227 09:35:15.519165  605150 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1227 09:35:15.519220  605150 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-912564
	I1227 09:35:15.537757  605150 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/embed-certs-912564/id_rsa Username:docker}
	I1227 09:35:15.629722  605150 ssh_runner.go:195] Run: cat /etc/os-release
	I1227 09:35:15.633432  605150 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1227 09:35:15.633465  605150 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1227 09:35:15.633477  605150 filesync.go:126] Scanning /home/jenkins/minikube-integration/22343-373581/.minikube/addons for local assets ...
	I1227 09:35:15.633528  605150 filesync.go:126] Scanning /home/jenkins/minikube-integration/22343-373581/.minikube/files for local assets ...
	I1227 09:35:15.633623  605150 filesync.go:149] local asset: /home/jenkins/minikube-integration/22343-373581/.minikube/files/etc/ssl/certs/3771712.pem -> 3771712.pem in /etc/ssl/certs
	I1227 09:35:15.633747  605150 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1227 09:35:15.641572  605150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/files/etc/ssl/certs/3771712.pem --> /etc/ssl/certs/3771712.pem (1708 bytes)
	I1227 09:35:15.661311  605150 start.go:296] duration metric: took 142.213408ms for postStartSetup
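
The filesync.go lines above scan the .minikube/files tree and map each local asset onto the same path inside the machine (files/etc/ssl/certs/3771712.pem lands in /etc/ssl/certs). A minimal sketch of that walk-and-map step, assuming the directory layout mirrors the target root:

    package main

    import (
    	"fmt"
    	"io/fs"
    	"path/filepath"
    )

    func main() {
    	root := "/home/jenkins/.minikube/files" // illustrative; the log uses the full integration path
    	filepath.WalkDir(root, func(path string, d fs.DirEntry, err error) error {
    		if err != nil || d.IsDir() {
    			return err
    		}
    		rel, _ := filepath.Rel(root, path)
    		// Everything under files/ keeps its relative path inside the machine.
    		fmt.Printf("local asset: %s -> /%s\n", path, rel)
    		return nil
    	})
    }
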
	I1227 09:35:15.661658  605150 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-912564
	I1227 09:35:15.679413  605150 profile.go:143] Saving config to /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/embed-certs-912564/config.json ...
	I1227 09:35:15.679684  605150 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 09:35:15.679735  605150 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-912564
	I1227 09:35:15.696949  605150 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/embed-certs-912564/id_rsa Username:docker}
	I1227 09:35:15.784646  605150 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1227 09:35:15.789143  605150 start.go:128] duration metric: took 6.29895496s to createHost
	I1227 09:35:15.789167  605150 start.go:83] releasing machines lock for "embed-certs-912564", held for 6.299100324s
	I1227 09:35:15.789236  605150 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-912564
	I1227 09:35:15.806186  605150 ssh_runner.go:195] Run: cat /version.json
	I1227 09:35:15.806234  605150 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-912564
	I1227 09:35:15.806242  605150 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1227 09:35:15.806330  605150 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-912564
	I1227 09:35:15.823786  605150 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/embed-certs-912564/id_rsa Username:docker}
	I1227 09:35:15.824179  605150 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/embed-certs-912564/id_rsa Username:docker}
	I1227 09:35:15.909501  605150 ssh_runner.go:195] Run: systemctl --version
	I1227 09:35:15.964925  605150 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1227 09:35:15.999198  605150 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1227 09:35:16.003625  605150 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1227 09:35:16.003691  605150 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1227 09:35:16.027370  605150 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1227 09:35:16.027394  605150 start.go:496] detecting cgroup driver to use...
	I1227 09:35:16.027435  605150 detect.go:190] detected "systemd" cgroup driver on host os
	I1227 09:35:16.027483  605150 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1227 09:35:16.043493  605150 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 09:35:16.055288  605150 docker.go:218] disabling cri-docker service (if available) ...
	I1227 09:35:16.055332  605150 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1227 09:35:16.070490  605150 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1227 09:35:16.087126  605150 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1227 09:35:16.164583  605150 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1227 09:35:16.245252  605150 docker.go:234] disabling docker service ...
	I1227 09:35:16.245323  605150 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1227 09:35:16.264248  605150 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1227 09:35:16.276614  605150 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1227 09:35:16.359353  605150 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1227 09:35:16.439495  605150 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1227 09:35:16.451565  605150 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 09:35:16.465343  605150 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1227 09:35:16.465401  605150 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:35:16.475053  605150 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1227 09:35:16.475110  605150 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:35:16.483413  605150 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:35:16.491443  605150 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:35:16.499332  605150 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1227 09:35:16.506995  605150 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:35:16.514984  605150 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:35:16.527688  605150 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:35:16.536548  605150 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1227 09:35:16.543350  605150 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1227 09:35:16.550007  605150 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 09:35:16.626281  605150 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1227 09:35:16.759154  605150 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1227 09:35:16.759229  605150 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1227 09:35:16.763114  605150 start.go:574] Will wait 60s for crictl version
	I1227 09:35:16.763164  605150 ssh_runner.go:195] Run: which crictl
	I1227 09:35:16.766547  605150 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1227 09:35:16.791453  605150 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
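
start.go:553/574 above wait up to 60s first for the CRI-O socket path and then for a working crictl before the version lines are read. A minimal sketch of the stat-until-deadline part of that wait:

    package main

    import (
    	"fmt"
    	"os"
    	"time"
    )

    func waitForSocket(path string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for {
    		if _, err := os.Stat(path); err == nil {
    			return nil // socket file exists; the runtime should accept connections shortly
    		}
    		if time.Now().After(deadline) {
    			return fmt.Errorf("timed out waiting for %s", path)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    }

    func main() {
    	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
    		fmt.Println(err)
    		return
    	}
    	fmt.Println("crio socket is ready")
    }
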
	I1227 09:35:16.791537  605150 ssh_runner.go:195] Run: crio --version
	I1227 09:35:16.818384  605150 ssh_runner.go:195] Run: crio --version
	I1227 09:35:16.846064  605150 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.34.3 ...
	I1227 09:35:14.068874  547714 api_server.go:299] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1227 09:35:14.069300  547714 api_server.go:315] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1227 09:35:14.069363  547714 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 09:35:14.069423  547714 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 09:35:14.108425  547714 cri.go:96] found id: "d44e2ceac7e3f8f8099038cf6de8f1afba5fbe79f9878ba08988175a51183420"
	I1227 09:35:14.108456  547714 cri.go:96] found id: ""
	I1227 09:35:14.108468  547714 logs.go:282] 1 containers: [d44e2ceac7e3f8f8099038cf6de8f1afba5fbe79f9878ba08988175a51183420]
	I1227 09:35:14.108528  547714 ssh_runner.go:195] Run: which crictl
	I1227 09:35:14.112319  547714 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 09:35:14.112377  547714 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 09:35:14.154501  547714 cri.go:96] found id: ""
	I1227 09:35:14.154528  547714 logs.go:282] 0 containers: []
	W1227 09:35:14.154540  547714 logs.go:284] No container was found matching "etcd"
	I1227 09:35:14.154547  547714 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 09:35:14.154602  547714 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 09:35:14.196438  547714 cri.go:96] found id: ""
	I1227 09:35:14.196468  547714 logs.go:282] 0 containers: []
	W1227 09:35:14.196480  547714 logs.go:284] No container was found matching "coredns"
	I1227 09:35:14.196489  547714 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 09:35:14.196555  547714 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 09:35:14.238679  547714 cri.go:96] found id: "5225acb4ada000e08a4b13ab1d7fe7f2369d884adf3bca584ef436bb8d74caed"
	I1227 09:35:14.238710  547714 cri.go:96] found id: ""
	I1227 09:35:14.238723  547714 logs.go:282] 1 containers: [5225acb4ada000e08a4b13ab1d7fe7f2369d884adf3bca584ef436bb8d74caed]
	I1227 09:35:14.238804  547714 ssh_runner.go:195] Run: which crictl
	I1227 09:35:14.242546  547714 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 09:35:14.242610  547714 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 09:35:14.282105  547714 cri.go:96] found id: ""
	I1227 09:35:14.282134  547714 logs.go:282] 0 containers: []
	W1227 09:35:14.282144  547714 logs.go:284] No container was found matching "kube-proxy"
	I1227 09:35:14.282157  547714 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 09:35:14.282217  547714 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 09:35:14.327964  547714 cri.go:96] found id: "5a7c7b503834980891eb2dfade239cc076b6232512e745f64729a2d2a81b6dcf"
	I1227 09:35:14.327987  547714 cri.go:96] found id: ""
	I1227 09:35:14.327997  547714 logs.go:282] 1 containers: [5a7c7b503834980891eb2dfade239cc076b6232512e745f64729a2d2a81b6dcf]
	I1227 09:35:14.328053  547714 ssh_runner.go:195] Run: which crictl
	I1227 09:35:14.331894  547714 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 09:35:14.331962  547714 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 09:35:14.366388  547714 cri.go:96] found id: ""
	I1227 09:35:14.366422  547714 logs.go:282] 0 containers: []
	W1227 09:35:14.366433  547714 logs.go:284] No container was found matching "kindnet"
	I1227 09:35:14.366441  547714 cri.go:61] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1227 09:35:14.366500  547714 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1227 09:35:14.416344  547714 cri.go:96] found id: ""
	I1227 09:35:14.416374  547714 logs.go:282] 0 containers: []
	W1227 09:35:14.416393  547714 logs.go:284] No container was found matching "storage-provisioner"
	I1227 09:35:14.416406  547714 logs.go:123] Gathering logs for describe nodes ...
	I1227 09:35:14.416431  547714 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 09:35:14.505425  547714 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 09:35:14.505450  547714 logs.go:123] Gathering logs for kube-apiserver [d44e2ceac7e3f8f8099038cf6de8f1afba5fbe79f9878ba08988175a51183420] ...
	I1227 09:35:14.505467  547714 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d44e2ceac7e3f8f8099038cf6de8f1afba5fbe79f9878ba08988175a51183420"
	I1227 09:35:14.555435  547714 logs.go:123] Gathering logs for kube-scheduler [5225acb4ada000e08a4b13ab1d7fe7f2369d884adf3bca584ef436bb8d74caed] ...
	I1227 09:35:14.555468  547714 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5225acb4ada000e08a4b13ab1d7fe7f2369d884adf3bca584ef436bb8d74caed"
	I1227 09:35:14.640374  547714 logs.go:123] Gathering logs for kube-controller-manager [5a7c7b503834980891eb2dfade239cc076b6232512e745f64729a2d2a81b6dcf] ...
	I1227 09:35:14.640408  547714 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a7c7b503834980891eb2dfade239cc076b6232512e745f64729a2d2a81b6dcf"
	I1227 09:35:14.679632  547714 logs.go:123] Gathering logs for CRI-O ...
	I1227 09:35:14.679657  547714 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 09:35:14.736633  547714 logs.go:123] Gathering logs for container status ...
	I1227 09:35:14.736663  547714 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 09:35:14.778890  547714 logs.go:123] Gathering logs for kubelet ...
	I1227 09:35:14.778917  547714 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 09:35:14.891951  547714 logs.go:123] Gathering logs for dmesg ...
	I1227 09:35:14.891980  547714 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 09:35:17.409543  547714 api_server.go:299] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1227 09:35:17.409942  547714 api_server.go:315] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1227 09:35:17.409993  547714 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 09:35:17.410050  547714 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 09:35:17.446888  547714 cri.go:96] found id: "d44e2ceac7e3f8f8099038cf6de8f1afba5fbe79f9878ba08988175a51183420"
	I1227 09:35:17.446911  547714 cri.go:96] found id: ""
	I1227 09:35:17.446920  547714 logs.go:282] 1 containers: [d44e2ceac7e3f8f8099038cf6de8f1afba5fbe79f9878ba08988175a51183420]
	I1227 09:35:17.446971  547714 ssh_runner.go:195] Run: which crictl
	I1227 09:35:17.450620  547714 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 09:35:17.450683  547714 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 09:35:17.486136  547714 cri.go:96] found id: ""
	I1227 09:35:17.486161  547714 logs.go:282] 0 containers: []
	W1227 09:35:17.486171  547714 logs.go:284] No container was found matching "etcd"
	I1227 09:35:17.486189  547714 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 09:35:17.486242  547714 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 09:35:17.521610  547714 cri.go:96] found id: ""
	I1227 09:35:17.521633  547714 logs.go:282] 0 containers: []
	W1227 09:35:17.521643  547714 logs.go:284] No container was found matching "coredns"
	I1227 09:35:17.521650  547714 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 09:35:17.521708  547714 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 09:35:17.557598  547714 cri.go:96] found id: "5225acb4ada000e08a4b13ab1d7fe7f2369d884adf3bca584ef436bb8d74caed"
	I1227 09:35:17.557629  547714 cri.go:96] found id: ""
	I1227 09:35:17.557641  547714 logs.go:282] 1 containers: [5225acb4ada000e08a4b13ab1d7fe7f2369d884adf3bca584ef436bb8d74caed]
	I1227 09:35:17.557701  547714 ssh_runner.go:195] Run: which crictl
	I1227 09:35:17.561461  547714 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 09:35:17.561528  547714 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 09:35:17.598356  547714 cri.go:96] found id: ""
	I1227 09:35:17.598379  547714 logs.go:282] 0 containers: []
	W1227 09:35:17.598389  547714 logs.go:284] No container was found matching "kube-proxy"
	I1227 09:35:17.598397  547714 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 09:35:17.598457  547714 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 09:35:17.633267  547714 cri.go:96] found id: "5a7c7b503834980891eb2dfade239cc076b6232512e745f64729a2d2a81b6dcf"
	I1227 09:35:17.633287  547714 cri.go:96] found id: ""
	I1227 09:35:17.633295  547714 logs.go:282] 1 containers: [5a7c7b503834980891eb2dfade239cc076b6232512e745f64729a2d2a81b6dcf]
	I1227 09:35:17.633355  547714 ssh_runner.go:195] Run: which crictl
	I1227 09:35:17.637231  547714 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 09:35:17.637284  547714 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 09:35:17.671762  547714 cri.go:96] found id: ""
	I1227 09:35:17.671804  547714 logs.go:282] 0 containers: []
	W1227 09:35:17.671815  547714 logs.go:284] No container was found matching "kindnet"
	I1227 09:35:17.671824  547714 cri.go:61] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1227 09:35:17.671880  547714 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1227 09:35:17.706736  547714 cri.go:96] found id: ""
	I1227 09:35:17.706759  547714 logs.go:282] 0 containers: []
	W1227 09:35:17.706768  547714 logs.go:284] No container was found matching "storage-provisioner"
	I1227 09:35:17.706779  547714 logs.go:123] Gathering logs for describe nodes ...
	I1227 09:35:17.706808  547714 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 09:35:17.767756  547714 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 09:35:17.767774  547714 logs.go:123] Gathering logs for kube-apiserver [d44e2ceac7e3f8f8099038cf6de8f1afba5fbe79f9878ba08988175a51183420] ...
	I1227 09:35:17.767798  547714 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d44e2ceac7e3f8f8099038cf6de8f1afba5fbe79f9878ba08988175a51183420"
	I1227 09:35:17.808331  547714 logs.go:123] Gathering logs for kube-scheduler [5225acb4ada000e08a4b13ab1d7fe7f2369d884adf3bca584ef436bb8d74caed] ...
	I1227 09:35:17.808356  547714 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5225acb4ada000e08a4b13ab1d7fe7f2369d884adf3bca584ef436bb8d74caed"
	I1227 09:35:17.890107  547714 logs.go:123] Gathering logs for kube-controller-manager [5a7c7b503834980891eb2dfade239cc076b6232512e745f64729a2d2a81b6dcf] ...
	I1227 09:35:17.890133  547714 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a7c7b503834980891eb2dfade239cc076b6232512e745f64729a2d2a81b6dcf"
	I1227 09:35:17.926286  547714 logs.go:123] Gathering logs for CRI-O ...
	I1227 09:35:17.926318  547714 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 09:35:17.978225  547714 logs.go:123] Gathering logs for container status ...
	I1227 09:35:17.978260  547714 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 09:35:18.021883  547714 logs.go:123] Gathering logs for kubelet ...
	I1227 09:35:18.021917  547714 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 09:35:16.847153  605150 cli_runner.go:164] Run: docker network inspect embed-certs-912564 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 09:35:16.865138  605150 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1227 09:35:16.869376  605150 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
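
The bash one-liner above makes the host.minikube.internal mapping idempotent: filter out any existing line for that name, append the fresh gateway entry, and copy the temp file over /etc/hosts. The same filter-and-append shape in Go, assuming an ordinary writable file stands in for /etc/hosts:

    package main

    import (
    	"log"
    	"os"
    	"strings"
    )

    func main() {
    	const path = "hosts" // stand-in for /etc/hosts, which needs sudo to replace
    	data, err := os.ReadFile(path)
    	if err != nil {
    		log.Fatal(err)
    	}
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		// Drop any previous mapping for host.minikube.internal.
    		if strings.HasSuffix(line, "\thost.minikube.internal") {
    			continue
    		}
    		kept = append(kept, line)
    	}
    	kept = append(kept, "192.168.94.1\thost.minikube.internal")
    	if err := os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
    		log.Fatal(err)
    	}
    }
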
	I1227 09:35:16.879595  605150 kubeadm.go:884] updating cluster {Name:embed-certs-912564 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:embed-certs-912564 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1227 09:35:16.879742  605150 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 09:35:16.879819  605150 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 09:35:16.910459  605150 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 09:35:16.910480  605150 crio.go:433] Images already preloaded, skipping extraction
	I1227 09:35:16.910531  605150 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 09:35:16.934744  605150 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 09:35:16.934767  605150 cache_images.go:86] Images are preloaded, skipping loading
	I1227 09:35:16.934776  605150 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.35.0 crio true true} ...
	I1227 09:35:16.934897  605150 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-912564 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:embed-certs-912564 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1227 09:35:16.934981  605150 ssh_runner.go:195] Run: crio config
	I1227 09:35:16.980025  605150 cni.go:84] Creating CNI manager for ""
	I1227 09:35:16.980048  605150 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 09:35:16.980065  605150 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1227 09:35:16.980086  605150 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-912564 NodeName:embed-certs-912564 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1227 09:35:16.980196  605150 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-912564"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1227 09:35:16.980254  605150 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1227 09:35:16.988586  605150 binaries.go:51] Found k8s binaries, skipping transfer
	I1227 09:35:16.988662  605150 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1227 09:35:16.996325  605150 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1227 09:35:17.008771  605150 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1227 09:35:17.023104  605150 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
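The scp above materializes the kubeadm config rendered earlier as /var/tmp/minikube/kubeadm.yaml.new on the node. To sanity-check such a multi-document config by hand before init, newer kubeadm releases (v1.26+) ship a validator; a sketch using this run's binary path:

	# Validate the generated config offline, without touching the node state.
	sudo /var/lib/minikube/binaries/v1.35.0/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml.new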
	I1227 09:35:17.036231  605150 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1227 09:35:17.039685  605150 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
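Both /etc/hosts edits in this start (host.minikube.internal at 09:35:16.869 and control-plane.minikube.internal here) use one idempotent pattern: drop any existing line for the name, append the fresh tab-separated mapping, and copy the temp file back under sudo. Extracted as a sketch with this run's values:

	#!/bin/bash
	# Idempotently pin a hostname in /etc/hosts.
	IP=192.168.94.2
	NAME=control-plane.minikube.internal
	# Keep every line that does not end in "<tab><name>", then append the mapping.
	{ grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts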
	I1227 09:35:17.049262  605150 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 09:35:17.129570  605150 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 09:35:17.155091  605150 certs.go:69] Setting up /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/embed-certs-912564 for IP: 192.168.94.2
	I1227 09:35:17.155112  605150 certs.go:195] generating shared ca certs ...
	I1227 09:35:17.155131  605150 certs.go:227] acquiring lock for ca certs: {Name:mkab52f7e7387ca5a3a8b54d27e85c3698ea95d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:35:17.155306  605150 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22343-373581/.minikube/ca.key
	I1227 09:35:17.155360  605150 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22343-373581/.minikube/proxy-client-ca.key
	I1227 09:35:17.155380  605150 certs.go:257] generating profile certs ...
	I1227 09:35:17.155458  605150 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/embed-certs-912564/client.key
	I1227 09:35:17.155485  605150 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/embed-certs-912564/client.crt with IP's: []
	I1227 09:35:17.211580  605150 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/embed-certs-912564/client.crt ...
	I1227 09:35:17.211605  605150 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/embed-certs-912564/client.crt: {Name:mk3016f71cbc2f5f93fce828a67cde7cefc25bc2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:35:17.211780  605150 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/embed-certs-912564/client.key ...
	I1227 09:35:17.211809  605150 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/embed-certs-912564/client.key: {Name:mk0c61b0d795f3ff70f99b797518794a96b42fc4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:35:17.211932  605150 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/embed-certs-912564/apiserver.key.6601433b
	I1227 09:35:17.211955  605150 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/embed-certs-912564/apiserver.crt.6601433b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1227 09:35:17.326438  605150 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/embed-certs-912564/apiserver.crt.6601433b ...
	I1227 09:35:17.326464  605150 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/embed-certs-912564/apiserver.crt.6601433b: {Name:mk0f92c10dacbc77d4568fa8e3d4a9cb5a03fc14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:35:17.326651  605150 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/embed-certs-912564/apiserver.key.6601433b ...
	I1227 09:35:17.326668  605150 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/embed-certs-912564/apiserver.key.6601433b: {Name:mk9da0c021c2e5b423eec0a458ab1cc5279853de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:35:17.326778  605150 certs.go:382] copying /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/embed-certs-912564/apiserver.crt.6601433b -> /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/embed-certs-912564/apiserver.crt
	I1227 09:35:17.326916  605150 certs.go:386] copying /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/embed-certs-912564/apiserver.key.6601433b -> /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/embed-certs-912564/apiserver.key
	I1227 09:35:17.327008  605150 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/embed-certs-912564/proxy-client.key
	I1227 09:35:17.327030  605150 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/embed-certs-912564/proxy-client.crt with IP's: []
	I1227 09:35:17.405824  605150 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/embed-certs-912564/proxy-client.crt ...
	I1227 09:35:17.405850  605150 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/embed-certs-912564/proxy-client.crt: {Name:mkc276655efb3efb58e40b44c41213c696d939e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:35:17.406034  605150 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/embed-certs-912564/proxy-client.key ...
	I1227 09:35:17.406050  605150 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/embed-certs-912564/proxy-client.key: {Name:mk80490ba14774a2f35cf52483d76421e672b33f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
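The client, apiserver, and aggregator proxy-client certs above are generated in-process by minikube's crypto helpers and signed by the shared minikubeCA. A rough openssl equivalent for the client cert, assuming the CA files from this profile directory (the subject line is illustrative, not taken from this log):

	# Hypothetical openssl stand-in for the in-process client cert generation.
	CA=/home/jenkins/minikube-integration/22343-373581/.minikube
	openssl req -new -newkey rsa:2048 -nodes \
	  -keyout client.key -subj "/O=system:masters/CN=minikube-user" \
	  -out client.csr
	openssl x509 -req -in client.csr -CA "$CA/ca.crt" -CAkey "$CA/ca.key" \
	  -CAcreateserial -days 365 -out client.crt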
	I1227 09:35:17.406284  605150 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/377171.pem (1338 bytes)
	W1227 09:35:17.406331  605150 certs.go:480] ignoring /home/jenkins/minikube-integration/22343-373581/.minikube/certs/377171_empty.pem, impossibly tiny 0 bytes
	I1227 09:35:17.406346  605150 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca-key.pem (1679 bytes)
	I1227 09:35:17.406399  605150 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca.pem (1082 bytes)
	I1227 09:35:17.406436  605150 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/cert.pem (1123 bytes)
	I1227 09:35:17.406476  605150 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/key.pem (1679 bytes)
	I1227 09:35:17.406537  605150 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-373581/.minikube/files/etc/ssl/certs/3771712.pem (1708 bytes)
	I1227 09:35:17.407204  605150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1227 09:35:17.425913  605150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1227 09:35:17.444202  605150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1227 09:35:17.462189  605150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1227 09:35:17.479603  605150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/embed-certs-912564/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1227 09:35:17.498064  605150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/embed-certs-912564/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1227 09:35:17.516154  605150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/embed-certs-912564/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1227 09:35:17.534858  605150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/embed-certs-912564/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1227 09:35:17.553755  605150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1227 09:35:17.573274  605150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/certs/377171.pem --> /usr/share/ca-certificates/377171.pem (1338 bytes)
	I1227 09:35:17.592084  605150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/files/etc/ssl/certs/3771712.pem --> /usr/share/ca-certificates/3771712.pem (1708 bytes)
	I1227 09:35:17.610277  605150 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1227 09:35:17.622988  605150 ssh_runner.go:195] Run: openssl version
	I1227 09:35:17.629557  605150 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3771712.pem
	I1227 09:35:17.637383  605150 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3771712.pem /etc/ssl/certs/3771712.pem
	I1227 09:35:17.645098  605150 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3771712.pem
	I1227 09:35:17.648648  605150 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 27 09:09 /usr/share/ca-certificates/3771712.pem
	I1227 09:35:17.648704  605150 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3771712.pem
	I1227 09:35:17.687166  605150 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1227 09:35:17.695216  605150 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/3771712.pem /etc/ssl/certs/3ec20f2e.0
	I1227 09:35:17.703355  605150 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:35:17.711138  605150 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1227 09:35:17.719060  605150 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:35:17.722664  605150 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 09:06 /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:35:17.722716  605150 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:35:17.761670  605150 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1227 09:35:17.770838  605150 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1227 09:35:17.778985  605150 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/377171.pem
	I1227 09:35:17.787188  605150 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/377171.pem /etc/ssl/certs/377171.pem
	I1227 09:35:17.795173  605150 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/377171.pem
	I1227 09:35:17.798841  605150 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 27 09:09 /usr/share/ca-certificates/377171.pem
	I1227 09:35:17.798889  605150 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/377171.pem
	I1227 09:35:17.835616  605150 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1227 09:35:17.844167  605150 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/377171.pem /etc/ssl/certs/51391683.0
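The repeated test -L / ln -fs pairs above implement OpenSSL's hashed-directory convention: every CA under /etc/ssl/certs must also be reachable as <subject-hash>.0 (e.g. b5213941.0 for minikubeCA here) so verifiers can locate it by hash. The same loop, generalized:

	# Link each CA into /etc/ssl/certs under its OpenSSL subject hash.
	for pem in /usr/share/ca-certificates/*.pem; do
	  hash=$(openssl x509 -hash -noout -in "$pem")
	  sudo ln -fs "$pem" "/etc/ssl/certs/${hash}.0"
	done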
	I1227 09:35:17.851731  605150 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1227 09:35:17.855300  605150 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1227 09:35:17.855359  605150 kubeadm.go:401] StartCluster: {Name:embed-certs-912564 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:embed-certs-912564 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 09:35:17.855454  605150 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 09:35:17.855520  605150 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 09:35:17.881225  605150 cri.go:96] found id: ""
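An empty id list here is the fresh-node signal: no kube-system containers have ever run, so the stale-config checks below are expected to fail and kubeadm init proceeds from scratch. The equivalent manual probe:

	# Empty output means no kube-system containers exist on the node yet.
	sudo crictl --timeout=10s ps -a --quiet \
	  --label io.kubernetes.pod.namespace=kube-system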
	I1227 09:35:17.881287  605150 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1227 09:35:17.889002  605150 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1227 09:35:17.896740  605150 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1227 09:35:17.896806  605150 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1227 09:35:17.904434  605150 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1227 09:35:17.904454  605150 kubeadm.go:158] found existing configuration files:
	
	I1227 09:35:17.904501  605150 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1227 09:35:17.911783  605150 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1227 09:35:17.911830  605150 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1227 09:35:17.919138  605150 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1227 09:35:17.927588  605150 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1227 09:35:17.927641  605150 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1227 09:35:17.935554  605150 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1227 09:35:17.943834  605150 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1227 09:35:17.943882  605150 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1227 09:35:17.951559  605150 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1227 09:35:17.958926  605150 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1227 09:35:17.958974  605150 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
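The four grep/rm exchanges above apply one rule per kubeconfig: keep the file only if it already targets the expected control-plane endpoint, otherwise remove it so kubeadm regenerates it. Condensed into a loop (endpoint taken from this run):

	# Drop any kubeconfig that does not point at the expected endpoint.
	EP=https://control-plane.minikube.internal:8443
	for f in admin kubelet controller-manager scheduler; do
	  sudo grep -q "$EP" "/etc/kubernetes/$f.conf" 2>/dev/null \
	    || sudo rm -f "/etc/kubernetes/$f.conf"
	done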
	I1227 09:35:17.965852  605150 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1227 09:35:18.004429  605150 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I1227 09:35:18.004509  605150 kubeadm.go:319] [preflight] Running pre-flight checks
	I1227 09:35:18.081394  605150 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1227 09:35:18.081502  605150 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1227 09:35:18.081580  605150 kubeadm.go:319] OS: Linux
	I1227 09:35:18.081660  605150 kubeadm.go:319] CGROUPS_CPU: enabled
	I1227 09:35:18.081741  605150 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1227 09:35:18.081834  605150 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1227 09:35:18.081906  605150 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1227 09:35:18.081972  605150 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1227 09:35:18.082046  605150 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1227 09:35:18.082120  605150 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1227 09:35:18.082190  605150 kubeadm.go:319] CGROUPS_IO: enabled
	I1227 09:35:18.146299  605150 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1227 09:35:18.146516  605150 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1227 09:35:18.146678  605150 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1227 09:35:18.155025  605150 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	W1227 09:35:15.031442  599940 pod_ready.go:104] pod "coredns-7d764666f9-8s7c7" is not "Ready", error: <nil>
	W1227 09:35:17.032406  599940 pod_ready.go:104] pod "coredns-7d764666f9-8s7c7" is not "Ready", error: <nil>
	I1227 09:35:18.157182  605150 out.go:252]   - Generating certificates and keys ...
	I1227 09:35:18.157290  605150 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1227 09:35:18.157426  605150 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1227 09:35:18.204811  605150 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1227 09:35:18.262711  605150 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1227 09:35:18.318863  605150 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1227 09:35:18.425103  605150 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1227 09:35:18.546408  605150 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1227 09:35:18.546580  605150 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [embed-certs-912564 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1227 09:35:18.712954  605150 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1227 09:35:18.713131  605150 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-912564 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1227 09:35:18.825130  605150 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1227 09:35:18.902055  605150 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1227 09:35:19.050144  605150 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1227 09:35:19.050217  605150 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1227 09:35:19.082188  605150 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1227 09:35:19.161980  605150 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1227 09:35:19.184256  605150 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1227 09:35:19.207677  605150 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1227 09:35:19.245443  605150 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1227 09:35:19.246038  605150 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1227 09:35:19.249616  605150 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1227 09:35:19.252022  605150 out.go:252]   - Booting up control plane ...
	I1227 09:35:19.252143  605150 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1227 09:35:19.252270  605150 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1227 09:35:19.252366  605150 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1227 09:35:19.266452  605150 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1227 09:35:19.266583  605150 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1227 09:35:19.272881  605150 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1227 09:35:19.273156  605150 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1227 09:35:19.273203  605150 kubeadm.go:319] [kubelet-start] Starting the kubelet
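kubeadm runs the [certs], [kubeconfig], [etcd], [control-plane], and [kubelet-start] phases above as part of init, but each phase can also be re-run in isolation. For instance, regenerating only the certificates against the same config would look roughly like this (binary path from this run):

	# Re-run just the certificate phase of kubeadm init.
	sudo /var/lib/minikube/binaries/v1.35.0/kubeadm init phase certs all \
	  --config /var/tmp/minikube/kubeadm.yaml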
	
	
	==> CRI-O <==
	Dec 27 09:35:06 old-k8s-version-094398 crio[772]: time="2025-12-27T09:35:06.376117441Z" level=info msg="Starting container: 28ef973ed1a87a4c30d684c5a08267938c18061ad0afb8ade739686f173f9c3c" id=a1c12ff9-bf07-4152-ab37-db452e3fea0b name=/runtime.v1.RuntimeService/StartContainer
	Dec 27 09:35:06 old-k8s-version-094398 crio[772]: time="2025-12-27T09:35:06.378780371Z" level=info msg="Started container" PID=2193 containerID=28ef973ed1a87a4c30d684c5a08267938c18061ad0afb8ade739686f173f9c3c description=kube-system/coredns-5dd5756b68-l2f7v/coredns id=a1c12ff9-bf07-4152-ab37-db452e3fea0b name=/runtime.v1.RuntimeService/StartContainer sandboxID=6dbe5afcdce99a40ac0a228f80f0be22510d94ed1386652c3b41e657959cd1b5
	Dec 27 09:35:09 old-k8s-version-094398 crio[772]: time="2025-12-27T09:35:09.688887352Z" level=info msg="Running pod sandbox: default/busybox/POD" id=c823e372-7c20-450c-8850-68198a9da806 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 27 09:35:09 old-k8s-version-094398 crio[772]: time="2025-12-27T09:35:09.688973844Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 09:35:09 old-k8s-version-094398 crio[772]: time="2025-12-27T09:35:09.694769442Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:28ac4db907cd62421010a6a22251299b92af412aa6abda73aa71da65f03ce844 UID:27e89ec3-23a4-4ea1-a3ac-26f25e43f5ad NetNS:/var/run/netns/fc782913-0f04-4743-9126-6054bbb7830c Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00080e850}] Aliases:map[]}"
	Dec 27 09:35:09 old-k8s-version-094398 crio[772]: time="2025-12-27T09:35:09.694834218Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Dec 27 09:35:09 old-k8s-version-094398 crio[772]: time="2025-12-27T09:35:09.705603542Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:28ac4db907cd62421010a6a22251299b92af412aa6abda73aa71da65f03ce844 UID:27e89ec3-23a4-4ea1-a3ac-26f25e43f5ad NetNS:/var/run/netns/fc782913-0f04-4743-9126-6054bbb7830c Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00080e850}] Aliases:map[]}"
	Dec 27 09:35:09 old-k8s-version-094398 crio[772]: time="2025-12-27T09:35:09.705772032Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Dec 27 09:35:09 old-k8s-version-094398 crio[772]: time="2025-12-27T09:35:09.706763334Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 27 09:35:09 old-k8s-version-094398 crio[772]: time="2025-12-27T09:35:09.707943786Z" level=info msg="Ran pod sandbox 28ac4db907cd62421010a6a22251299b92af412aa6abda73aa71da65f03ce844 with infra container: default/busybox/POD" id=c823e372-7c20-450c-8850-68198a9da806 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 27 09:35:09 old-k8s-version-094398 crio[772]: time="2025-12-27T09:35:09.710676741Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=b45a0fbc-f2fd-4a36-ad34-64fd6a10e73d name=/runtime.v1.ImageService/ImageStatus
	Dec 27 09:35:09 old-k8s-version-094398 crio[772]: time="2025-12-27T09:35:09.710827735Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=b45a0fbc-f2fd-4a36-ad34-64fd6a10e73d name=/runtime.v1.ImageService/ImageStatus
	Dec 27 09:35:09 old-k8s-version-094398 crio[772]: time="2025-12-27T09:35:09.710878428Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=b45a0fbc-f2fd-4a36-ad34-64fd6a10e73d name=/runtime.v1.ImageService/ImageStatus
	Dec 27 09:35:09 old-k8s-version-094398 crio[772]: time="2025-12-27T09:35:09.711414574Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=be1700b2-7a4c-4718-8031-2719c64a4899 name=/runtime.v1.ImageService/PullImage
	Dec 27 09:35:09 old-k8s-version-094398 crio[772]: time="2025-12-27T09:35:09.712718926Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 27 09:35:11 old-k8s-version-094398 crio[772]: time="2025-12-27T09:35:11.618095351Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=be1700b2-7a4c-4718-8031-2719c64a4899 name=/runtime.v1.ImageService/PullImage
	Dec 27 09:35:11 old-k8s-version-094398 crio[772]: time="2025-12-27T09:35:11.61911801Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=128e72d9-ee1e-423c-9b41-95ccecf233e5 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 09:35:11 old-k8s-version-094398 crio[772]: time="2025-12-27T09:35:11.620825237Z" level=info msg="Creating container: default/busybox/busybox" id=86b009be-397a-447f-89f8-f756dd48e802 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 09:35:11 old-k8s-version-094398 crio[772]: time="2025-12-27T09:35:11.620939075Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 09:35:11 old-k8s-version-094398 crio[772]: time="2025-12-27T09:35:11.62502239Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 09:35:11 old-k8s-version-094398 crio[772]: time="2025-12-27T09:35:11.62540986Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 09:35:11 old-k8s-version-094398 crio[772]: time="2025-12-27T09:35:11.656993204Z" level=info msg="Created container 0ac7ab64b5ca4ed9b206c11dd7d5dc111d11b42a4547545ac0f85cb9e952159e: default/busybox/busybox" id=86b009be-397a-447f-89f8-f756dd48e802 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 09:35:11 old-k8s-version-094398 crio[772]: time="2025-12-27T09:35:11.657562577Z" level=info msg="Starting container: 0ac7ab64b5ca4ed9b206c11dd7d5dc111d11b42a4547545ac0f85cb9e952159e" id=95a8a1d5-66a7-42f1-9c93-68e65d63efdf name=/runtime.v1.RuntimeService/StartContainer
	Dec 27 09:35:11 old-k8s-version-094398 crio[772]: time="2025-12-27T09:35:11.659276243Z" level=info msg="Started container" PID=2269 containerID=0ac7ab64b5ca4ed9b206c11dd7d5dc111d11b42a4547545ac0f85cb9e952159e description=default/busybox/busybox id=95a8a1d5-66a7-42f1-9c93-68e65d63efdf name=/runtime.v1.RuntimeService/StartContainer sandboxID=28ac4db907cd62421010a6a22251299b92af412aa6abda73aa71da65f03ce844
	Dec 27 09:35:19 old-k8s-version-094398 crio[772]: time="2025-12-27T09:35:19.482626709Z" level=error msg="Unhandled Error: unable to upgrade websocket connection: websocket server finished before becoming ready (logger=\"UnhandledError\")"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	0ac7ab64b5ca4       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   9 seconds ago       Running             busybox                   0                   28ac4db907cd6       busybox                                          default
	28ef973ed1a87       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      14 seconds ago      Running             coredns                   0                   6dbe5afcdce99       coredns-5dd5756b68-l2f7v                         kube-system
	ef6026e39af97       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      14 seconds ago      Running             storage-provisioner       0                   57cd33b312852       storage-provisioner                              kube-system
	a23b1b6ba1d13       docker.io/kindest/kindnetd@sha256:7c22558dc06a570d46ea6e8a73b23cdc754eb81f7c08d3441a3171ad359ffc27    25 seconds ago      Running             kindnet-cni               0                   cbd1e4ffb064f       kindnet-hb4bf                                    kube-system
	20d145d7424e4       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                      27 seconds ago      Running             kube-proxy                0                   eff1e530799b4       kube-proxy-w8h4h                                 kube-system
	25861c63fd1ff       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                      45 seconds ago      Running             kube-scheduler            0                   ba11bf30bdc03       kube-scheduler-old-k8s-version-094398            kube-system
	be6fd8c13199f       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      45 seconds ago      Running             etcd                      0                   ec4cbd1e3b9a4       etcd-old-k8s-version-094398                      kube-system
	7aeadb8a3e8e5       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                      45 seconds ago      Running             kube-controller-manager   0                   285a2e222ae7b       kube-controller-manager-old-k8s-version-094398   kube-system
	0aa5a71c25278       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                      45 seconds ago      Running             kube-apiserver            0                   5d5a7fe4ea77a       kube-apiserver-old-k8s-version-094398            kube-system
	
	
	==> coredns [28ef973ed1a87a4c30d684c5a08267938c18061ad0afb8ade739686f173f9c3c] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:39680 - 31392 "HINFO IN 7367007287335830062.4152701499437897589. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.022798768s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-094398
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-094398
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6d12708197eb5984f26340287ced0d25367967e8
	                    minikube.k8s.io/name=old-k8s-version-094398
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_27T09_34_40_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 27 Dec 2025 09:34:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-094398
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 27 Dec 2025 09:35:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 27 Dec 2025 09:35:10 +0000   Sat, 27 Dec 2025 09:34:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 27 Dec 2025 09:35:10 +0000   Sat, 27 Dec 2025 09:34:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 27 Dec 2025 09:35:10 +0000   Sat, 27 Dec 2025 09:34:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 27 Dec 2025 09:35:10 +0000   Sat, 27 Dec 2025 09:35:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-094398
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 6a05ebbd9dbde47388fc365d694bbbfd
	  System UUID:                2ce0eeb8-0e2e-4c1d-a2fe-ade8e6b7daeb
	  Boot ID:                    38891013-55ac-4a66-aef6-0b0711ecc60c
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-5dd5756b68-l2f7v                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     28s
	  kube-system                 etcd-old-k8s-version-094398                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         41s
	  kube-system                 kindnet-hb4bf                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      28s
	  kube-system                 kube-apiserver-old-k8s-version-094398             250m (3%)     0 (0%)      0 (0%)           0 (0%)         41s
	  kube-system                 kube-controller-manager-old-k8s-version-094398    200m (2%)     0 (0%)      0 (0%)           0 (0%)         41s
	  kube-system                 kube-proxy-w8h4h                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-scheduler-old-k8s-version-094398             100m (1%)     0 (0%)      0 (0%)           0 (0%)         41s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 27s                kube-proxy       
	  Normal  Starting                 46s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  46s (x8 over 46s)  kubelet          Node old-k8s-version-094398 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    46s (x8 over 46s)  kubelet          Node old-k8s-version-094398 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     46s (x8 over 46s)  kubelet          Node old-k8s-version-094398 status is now: NodeHasSufficientPID
	  Normal  Starting                 41s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  41s                kubelet          Node old-k8s-version-094398 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    41s                kubelet          Node old-k8s-version-094398 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     41s                kubelet          Node old-k8s-version-094398 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           29s                node-controller  Node old-k8s-version-094398 event: Registered Node old-k8s-version-094398 in Controller
	  Normal  NodeReady                15s                kubelet          Node old-k8s-version-094398 status is now: NodeReady
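The node dump above is the kind of view `kubectl describe node` produces; reproducing it by hand against this profile (assuming the kubeconfig context is named after the profile, as minikube sets it):

	# Same view, collected manually.
	kubectl --context old-k8s-version-094398 \
	  describe node old-k8s-version-094398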
	
	
	==> dmesg <==
	[Dec27 09:02] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 3a ce 76 ea 00 16 08 06
	[ +15.308804] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 5e ed 76 f4 5a 59 08 06
	[  +0.000321] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 3a ce 76 ea 00 16 08 06
	[  +1.257710] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 62 30 8d 6e b6 d3 08 06
	[Dec27 09:03] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff de f8 3b c4 47 35 08 06
	[  +0.002052] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff de 0f d7 5a 4b 84 08 06
	[ +15.582399] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff f6 ce 0c 6c 86 d2 08 06
	[  +0.000307] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 62 30 8d 6e b6 d3 08 06
	[  +9.916919] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 5e cc 0c a0 d8 19 08 06
	[  +0.000360] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 06 01 72 f8 79 43 08 06
	[ +21.695394] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 66 bb be 81 50 ce 08 06
	[  +0.000381] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff de 0f d7 5a 4b 84 08 06
	
	
	==> etcd [be6fd8c13199f386a327af62dc741dcfb3628a89b37eb34bfe29190a224a9941] <==
	{"level":"info","ts":"2025-12-27T09:34:35.12035Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2025-12-27T09:34:35.12145Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-12-27T09:34:35.121636Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-12-27T09:34:35.121704Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-12-27T09:34:35.121656Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-12-27T09:34:35.121687Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-12-27T09:34:35.213955Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 1"}
	{"level":"info","ts":"2025-12-27T09:34:35.214135Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 1"}
	{"level":"info","ts":"2025-12-27T09:34:35.214247Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 1"}
	{"level":"info","ts":"2025-12-27T09:34:35.21429Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 2"}
	{"level":"info","ts":"2025-12-27T09:34:35.214299Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-12-27T09:34:35.214371Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 2"}
	{"level":"info","ts":"2025-12-27T09:34:35.214407Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-12-27T09:34:35.215066Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:old-k8s-version-094398 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-27T09:34:35.215165Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T09:34:35.215223Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T09:34:35.215253Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-27T09:34:35.215752Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-27T09:34:35.215866Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-27T09:34:35.215891Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-27T09:34:35.217048Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-27T09:34:35.217128Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-27T09:34:35.218081Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-27T09:34:35.21909Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-12-27T09:35:12.615317Z","caller":"traceutil/trace.go:171","msg":"trace[708855109] transaction","detail":"{read_only:false; response_revision:456; number_of_response:1; }","duration":"177.990615ms","start":"2025-12-27T09:35:12.437304Z","end":"2025-12-27T09:35:12.615295Z","steps":["trace[708855109] 'process raft request'  (duration: 177.868529ms)"],"step_count":1}
	
	
	==> kernel <==
	 09:35:20 up  1:17,  0 user,  load average: 3.05, 3.07, 2.23
	Linux old-k8s-version-094398 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [a23b1b6ba1d133376deecea67313d9c194b0a30a4323e2093964a23741ea9d99] <==
	I1227 09:34:55.673781       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1227 09:34:55.674046       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1227 09:34:55.674185       1 main.go:148] setting mtu 1500 for CNI 
	I1227 09:34:55.674209       1 main.go:178] kindnetd IP family: "ipv4"
	I1227 09:34:55.674235       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-27T09:34:55Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1227 09:34:55.879811       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1227 09:34:55.879840       1 controller.go:381] "Waiting for informer caches to sync"
	I1227 09:34:55.879851       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1227 09:34:55.880019       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1227 09:34:56.180485       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1227 09:34:56.180508       1 metrics.go:72] Registering metrics
	I1227 09:34:56.180575       1 controller.go:711] "Syncing nftables rules"
	I1227 09:35:05.888755       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1227 09:35:05.888831       1 main.go:301] handling current node
	I1227 09:35:15.880684       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1227 09:35:15.880732       1 main.go:301] handling current node
	
	
	==> kube-apiserver [0aa5a71c25278b1891e4daa4a5200f08b2c9daf9e2edf433576adb77746e1b22] <==
	I1227 09:34:36.617608       1 cache.go:39] Caches are synced for autoregister controller
	I1227 09:34:36.617672       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1227 09:34:36.617733       1 shared_informer.go:318] Caches are synced for configmaps
	I1227 09:34:36.617736       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1227 09:34:36.617939       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1227 09:34:36.618178       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1227 09:34:36.618213       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1227 09:34:36.618662       1 controller.go:624] quota admission added evaluator for: namespaces
	I1227 09:34:36.632783       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1227 09:34:36.667422       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1227 09:34:37.523197       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1227 09:34:37.526233       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1227 09:34:37.526249       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1227 09:34:37.864381       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1227 09:34:37.893930       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1227 09:34:37.926626       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1227 09:34:37.931517       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1227 09:34:37.932481       1 controller.go:624] quota admission added evaluator for: endpoints
	I1227 09:34:37.936737       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1227 09:34:38.544732       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1227 09:34:39.273903       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1227 09:34:39.282691       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1227 09:34:39.293064       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1227 09:34:52.637974       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1227 09:34:52.787307       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [7aeadb8a3e8e53be703047ed7181cbf81d4201b2ec76e0b0e59950ee1e86a8e4] <==
	I1227 09:34:52.192684       1 shared_informer.go:318] Caches are synced for resource quota
	I1227 09:34:52.192712       1 shared_informer.go:318] Caches are synced for resource quota
	I1227 09:34:52.510108       1 shared_informer.go:318] Caches are synced for garbage collector
	I1227 09:34:52.549071       1 shared_informer.go:318] Caches are synced for garbage collector
	I1227 09:34:52.549098       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1227 09:34:52.640994       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I1227 09:34:52.794205       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-w8h4h"
	I1227 09:34:52.795413       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-hb4bf"
	I1227 09:34:52.989076       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-4f6cs"
	I1227 09:34:52.994922       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-l2f7v"
	I1227 09:34:53.000541       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="359.613051ms"
	I1227 09:34:53.006013       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="5.424691ms"
	I1227 09:34:53.006089       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="39.268µs"
	I1227 09:34:53.007189       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="74.564µs"
	I1227 09:34:53.614929       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1227 09:34:53.624957       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-4f6cs"
	I1227 09:34:53.632907       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="18.028604ms"
	I1227 09:34:53.638530       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="5.580002ms"
	I1227 09:34:53.638644       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="74.458µs"
	I1227 09:35:06.017638       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="97.539µs"
	I1227 09:35:06.034812       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="160.022µs"
	I1227 09:35:06.427992       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="103.466µs"
	I1227 09:35:06.987743       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I1227 09:35:07.435626       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="9.632746ms"
	I1227 09:35:07.435732       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="71.888µs"
	
	
	==> kube-proxy [20d145d7424e465c8b7ccfbd0ef7b49219e85100c2dfec5dd4a4ba62a402c744] <==
	I1227 09:34:53.183126       1 server_others.go:69] "Using iptables proxy"
	I1227 09:34:53.192892       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I1227 09:34:53.210890       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1227 09:34:53.213148       1 server_others.go:152] "Using iptables Proxier"
	I1227 09:34:53.213172       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1227 09:34:53.213178       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1227 09:34:53.213207       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1227 09:34:53.213479       1 server.go:846] "Version info" version="v1.28.0"
	I1227 09:34:53.213497       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 09:34:53.214124       1 config.go:188] "Starting service config controller"
	I1227 09:34:53.214145       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1227 09:34:53.214165       1 config.go:97] "Starting endpoint slice config controller"
	I1227 09:34:53.214168       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1227 09:34:53.214467       1 config.go:315] "Starting node config controller"
	I1227 09:34:53.214488       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1227 09:34:53.314535       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1227 09:34:53.314592       1 shared_informer.go:318] Caches are synced for service config
	I1227 09:34:53.315011       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [25861c63fd1ffb0fcb3df312aa0f30354288a9f342c91f8af72e54852e6fee90] <==
	W1227 09:34:36.590193       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1227 09:34:36.590292       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1227 09:34:36.590202       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1227 09:34:36.590309       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1227 09:34:36.590306       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1227 09:34:36.590331       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1227 09:34:36.590332       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1227 09:34:36.590345       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1227 09:34:37.452490       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1227 09:34:37.452522       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1227 09:34:37.502266       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1227 09:34:37.502293       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1227 09:34:37.505461       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1227 09:34:37.505483       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1227 09:34:37.550937       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1227 09:34:37.550970       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1227 09:34:37.566142       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1227 09:34:37.566187       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1227 09:34:37.611113       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1227 09:34:37.611148       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1227 09:34:37.634474       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1227 09:34:37.634522       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1227 09:34:37.651965       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1227 09:34:37.652000       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I1227 09:34:40.481369       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 27 09:34:52 old-k8s-version-094398 kubelet[1406]: I1227 09:34:52.106515    1406 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 27 09:34:52 old-k8s-version-094398 kubelet[1406]: I1227 09:34:52.799044    1406 topology_manager.go:215] "Topology Admit Handler" podUID="447cf446-97af-453a-852e-6e459a39939e" podNamespace="kube-system" podName="kube-proxy-w8h4h"
	Dec 27 09:34:52 old-k8s-version-094398 kubelet[1406]: I1227 09:34:52.800525    1406 topology_manager.go:215] "Topology Admit Handler" podUID="f15b8136-82bb-46a5-942e-b7a9f2d21526" podNamespace="kube-system" podName="kindnet-hb4bf"
	Dec 27 09:34:52 old-k8s-version-094398 kubelet[1406]: I1227 09:34:52.821994    1406 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/447cf446-97af-453a-852e-6e459a39939e-kube-proxy\") pod \"kube-proxy-w8h4h\" (UID: \"447cf446-97af-453a-852e-6e459a39939e\") " pod="kube-system/kube-proxy-w8h4h"
	Dec 27 09:34:52 old-k8s-version-094398 kubelet[1406]: I1227 09:34:52.822053    1406 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/f15b8136-82bb-46a5-942e-b7a9f2d21526-cni-cfg\") pod \"kindnet-hb4bf\" (UID: \"f15b8136-82bb-46a5-942e-b7a9f2d21526\") " pod="kube-system/kindnet-hb4bf"
	Dec 27 09:34:52 old-k8s-version-094398 kubelet[1406]: I1227 09:34:52.822177    1406 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/447cf446-97af-453a-852e-6e459a39939e-xtables-lock\") pod \"kube-proxy-w8h4h\" (UID: \"447cf446-97af-453a-852e-6e459a39939e\") " pod="kube-system/kube-proxy-w8h4h"
	Dec 27 09:34:52 old-k8s-version-094398 kubelet[1406]: I1227 09:34:52.822228    1406 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76wgt\" (UniqueName: \"kubernetes.io/projected/447cf446-97af-453a-852e-6e459a39939e-kube-api-access-76wgt\") pod \"kube-proxy-w8h4h\" (UID: \"447cf446-97af-453a-852e-6e459a39939e\") " pod="kube-system/kube-proxy-w8h4h"
	Dec 27 09:34:52 old-k8s-version-094398 kubelet[1406]: I1227 09:34:52.822256    1406 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f15b8136-82bb-46a5-942e-b7a9f2d21526-xtables-lock\") pod \"kindnet-hb4bf\" (UID: \"f15b8136-82bb-46a5-942e-b7a9f2d21526\") " pod="kube-system/kindnet-hb4bf"
	Dec 27 09:34:52 old-k8s-version-094398 kubelet[1406]: I1227 09:34:52.822290    1406 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f15b8136-82bb-46a5-942e-b7a9f2d21526-lib-modules\") pod \"kindnet-hb4bf\" (UID: \"f15b8136-82bb-46a5-942e-b7a9f2d21526\") " pod="kube-system/kindnet-hb4bf"
	Dec 27 09:34:52 old-k8s-version-094398 kubelet[1406]: I1227 09:34:52.822382    1406 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/447cf446-97af-453a-852e-6e459a39939e-lib-modules\") pod \"kube-proxy-w8h4h\" (UID: \"447cf446-97af-453a-852e-6e459a39939e\") " pod="kube-system/kube-proxy-w8h4h"
	Dec 27 09:34:52 old-k8s-version-094398 kubelet[1406]: I1227 09:34:52.822473    1406 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfjhs\" (UniqueName: \"kubernetes.io/projected/f15b8136-82bb-46a5-942e-b7a9f2d21526-kube-api-access-dfjhs\") pod \"kindnet-hb4bf\" (UID: \"f15b8136-82bb-46a5-942e-b7a9f2d21526\") " pod="kube-system/kindnet-hb4bf"
	Dec 27 09:34:56 old-k8s-version-094398 kubelet[1406]: I1227 09:34:56.400240    1406 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-w8h4h" podStartSLOduration=4.400189019 podCreationTimestamp="2025-12-27 09:34:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 09:34:53.395811406 +0000 UTC m=+14.145294059" watchObservedRunningTime="2025-12-27 09:34:56.400189019 +0000 UTC m=+17.149671646"
	Dec 27 09:35:05 old-k8s-version-094398 kubelet[1406]: I1227 09:35:05.994521    1406 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Dec 27 09:35:06 old-k8s-version-094398 kubelet[1406]: I1227 09:35:06.015028    1406 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-hb4bf" podStartSLOduration=11.648318594 podCreationTimestamp="2025-12-27 09:34:52 +0000 UTC" firstStartedPulling="2025-12-27 09:34:53.109838496 +0000 UTC m=+13.859321116" lastFinishedPulling="2025-12-27 09:34:55.476456332 +0000 UTC m=+16.225938944" observedRunningTime="2025-12-27 09:34:56.40038654 +0000 UTC m=+17.149869159" watchObservedRunningTime="2025-12-27 09:35:06.014936422 +0000 UTC m=+26.764419048"
	Dec 27 09:35:06 old-k8s-version-094398 kubelet[1406]: I1227 09:35:06.015500    1406 topology_manager.go:215] "Topology Admit Handler" podUID="ed7a9a14-2a15-40f9-ac80-e99cc2704e98" podNamespace="kube-system" podName="storage-provisioner"
	Dec 27 09:35:06 old-k8s-version-094398 kubelet[1406]: I1227 09:35:06.017415    1406 topology_manager.go:215] "Topology Admit Handler" podUID="f7c50be2-905f-4271-9c61-b65f6ae61096" podNamespace="kube-system" podName="coredns-5dd5756b68-l2f7v"
	Dec 27 09:35:06 old-k8s-version-094398 kubelet[1406]: I1227 09:35:06.123862    1406 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vj4np\" (UniqueName: \"kubernetes.io/projected/f7c50be2-905f-4271-9c61-b65f6ae61096-kube-api-access-vj4np\") pod \"coredns-5dd5756b68-l2f7v\" (UID: \"f7c50be2-905f-4271-9c61-b65f6ae61096\") " pod="kube-system/coredns-5dd5756b68-l2f7v"
	Dec 27 09:35:06 old-k8s-version-094398 kubelet[1406]: I1227 09:35:06.123938    1406 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/ed7a9a14-2a15-40f9-ac80-e99cc2704e98-tmp\") pod \"storage-provisioner\" (UID: \"ed7a9a14-2a15-40f9-ac80-e99cc2704e98\") " pod="kube-system/storage-provisioner"
	Dec 27 09:35:06 old-k8s-version-094398 kubelet[1406]: I1227 09:35:06.123975    1406 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9sf24\" (UniqueName: \"kubernetes.io/projected/ed7a9a14-2a15-40f9-ac80-e99cc2704e98-kube-api-access-9sf24\") pod \"storage-provisioner\" (UID: \"ed7a9a14-2a15-40f9-ac80-e99cc2704e98\") " pod="kube-system/storage-provisioner"
	Dec 27 09:35:06 old-k8s-version-094398 kubelet[1406]: I1227 09:35:06.124015    1406 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f7c50be2-905f-4271-9c61-b65f6ae61096-config-volume\") pod \"coredns-5dd5756b68-l2f7v\" (UID: \"f7c50be2-905f-4271-9c61-b65f6ae61096\") " pod="kube-system/coredns-5dd5756b68-l2f7v"
	Dec 27 09:35:06 old-k8s-version-094398 kubelet[1406]: I1227 09:35:06.425692    1406 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-l2f7v" podStartSLOduration=14.425589686 podCreationTimestamp="2025-12-27 09:34:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 09:35:06.42495674 +0000 UTC m=+27.174439391" watchObservedRunningTime="2025-12-27 09:35:06.425589686 +0000 UTC m=+27.175072315"
	Dec 27 09:35:07 old-k8s-version-094398 kubelet[1406]: I1227 09:35:07.425694    1406 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=14.425643739 podCreationTimestamp="2025-12-27 09:34:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 09:35:06.441306686 +0000 UTC m=+27.190789343" watchObservedRunningTime="2025-12-27 09:35:07.425643739 +0000 UTC m=+28.175126368"
	Dec 27 09:35:09 old-k8s-version-094398 kubelet[1406]: I1227 09:35:09.386471    1406 topology_manager.go:215] "Topology Admit Handler" podUID="27e89ec3-23a4-4ea1-a3ac-26f25e43f5ad" podNamespace="default" podName="busybox"
	Dec 27 09:35:09 old-k8s-version-094398 kubelet[1406]: I1227 09:35:09.441939    1406 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzlwf\" (UniqueName: \"kubernetes.io/projected/27e89ec3-23a4-4ea1-a3ac-26f25e43f5ad-kube-api-access-bzlwf\") pod \"busybox\" (UID: \"27e89ec3-23a4-4ea1-a3ac-26f25e43f5ad\") " pod="default/busybox"
	Dec 27 09:35:12 old-k8s-version-094398 kubelet[1406]: I1227 09:35:12.515013    1406 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.6075835170000001 podCreationTimestamp="2025-12-27 09:35:09 +0000 UTC" firstStartedPulling="2025-12-27 09:35:09.711086919 +0000 UTC m=+30.460569537" lastFinishedPulling="2025-12-27 09:35:11.618457381 +0000 UTC m=+32.367940003" observedRunningTime="2025-12-27 09:35:12.514605322 +0000 UTC m=+33.264087969" watchObservedRunningTime="2025-12-27 09:35:12.514953983 +0000 UTC m=+33.264436610"
	
	
	==> storage-provisioner [ef6026e39af97a3128fdaba314106c14d7d995e3293d2c0afb4a920e65fc5881] <==
	I1227 09:35:06.389931       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1227 09:35:06.399562       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1227 09:35:06.399663       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1227 09:35:06.407716       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1227 09:35:06.407960       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-094398_0a728eca-ad1e-491e-bdd2-ff6720c68534!
	I1227 09:35:06.408034       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"46ce4ee8-5463-4b1a-acf6-c144e54f0eef", APIVersion:"v1", ResourceVersion:"429", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-094398_0a728eca-ad1e-491e-bdd2-ff6720c68534 became leader
	I1227 09:35:06.508076       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-094398_0a728eca-ad1e-491e-bdd2-ff6720c68534!
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-094398 -n old-k8s-version-094398
helpers_test.go:270: (dbg) Run:  kubectl --context old-k8s-version-094398 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.20s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.25s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-912564 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-912564 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (275.47787ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T09:35:56Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-912564 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
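Note on the exit status 11 above: before modifying addons, minikube checks whether the cluster is paused, and per the stderr that check shells out to `sudo runc list -f json` inside the node; here the check itself failed because /run/runc did not exist in this run, so the addon was never actually evaluated. A rough way to replay this by hand (illustrative commands against this run's profile, not part of the test):

	# replay the exact probe from the stderr above inside the node
	out/minikube-linux-amd64 -p embed-certs-912564 ssh -- sudo runc list -f json
	# check whether the runc state root that probe tries to open is present
	out/minikube-linux-amd64 -p embed-certs-912564 ssh -- ls -ld /run/runc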
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-912564 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context embed-certs-912564 describe deploy/metrics-server -n kube-system: exit status 1 (72.509953ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-912564 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
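Because enabling the addon failed, the metrics-server deployment was never created, so both the describe call and the image assertion above come up empty. Had the addon been enabled, the overridden image could be checked directly; a minimal sketch (the jsonpath query is an illustration, not the test's own parsing):

	# list the container images of the metrics-server deployment
	kubectl --context embed-certs-912564 -n kube-system get deployment metrics-server \
	  -o jsonpath='{.spec.template.spec.containers[*].image}'
	# the test expects this to contain: fake.domain/registry.k8s.io/echoserver:1.4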
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect embed-certs-912564
helpers_test.go:244: (dbg) docker inspect embed-certs-912564:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d1131cb70c56d219673852b4cb35a6ce18fd724e4080483a56df71d480e5a9d8",
	        "Created": "2025-12-27T09:35:13.90835085Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 605930,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-27T09:35:13.946668908Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c444b5e11df9d6bc496256b0e11f4e11deb33a885211ded2c3a4667df55bcb6b",
	        "ResolvConfPath": "/var/lib/docker/containers/d1131cb70c56d219673852b4cb35a6ce18fd724e4080483a56df71d480e5a9d8/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d1131cb70c56d219673852b4cb35a6ce18fd724e4080483a56df71d480e5a9d8/hostname",
	        "HostsPath": "/var/lib/docker/containers/d1131cb70c56d219673852b4cb35a6ce18fd724e4080483a56df71d480e5a9d8/hosts",
	        "LogPath": "/var/lib/docker/containers/d1131cb70c56d219673852b4cb35a6ce18fd724e4080483a56df71d480e5a9d8/d1131cb70c56d219673852b4cb35a6ce18fd724e4080483a56df71d480e5a9d8-json.log",
	        "Name": "/embed-certs-912564",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "embed-certs-912564:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-912564",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d1131cb70c56d219673852b4cb35a6ce18fd724e4080483a56df71d480e5a9d8",
	                "LowerDir": "/var/lib/docker/overlay2/817468c022fdafe4e781f5a83226ccdb150ad3173bf064c178c11a690bbee996-init/diff:/var/lib/docker/overlay2/8c7e7d4b0a6e752d3b033998593f18412fe880cdae3fca36ce4655d5d5dd6a34/diff",
	                "MergedDir": "/var/lib/docker/overlay2/817468c022fdafe4e781f5a83226ccdb150ad3173bf064c178c11a690bbee996/merged",
	                "UpperDir": "/var/lib/docker/overlay2/817468c022fdafe4e781f5a83226ccdb150ad3173bf064c178c11a690bbee996/diff",
	                "WorkDir": "/var/lib/docker/overlay2/817468c022fdafe4e781f5a83226ccdb150ad3173bf064c178c11a690bbee996/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "embed-certs-912564",
	                "Source": "/var/lib/docker/volumes/embed-certs-912564/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-912564",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-912564",
	                "name.minikube.sigs.k8s.io": "embed-certs-912564",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "807d74f137287b9a51580e5ccd16ef07a65696f776d5c6b894c54d5ecbcbe25d",
	            "SandboxKey": "/var/run/docker/netns/807d74f13728",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33433"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33434"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33437"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33435"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33436"
	                    }
	                ]
	            },
	            "Networks": {
	                "embed-certs-912564": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "a8636b7bd1bb3a484b5591e16629c2b067fb4955cb3fcafbd69f576a7b19eb9b",
	                    "EndpointID": "2abf789a839dc7b5bbdf13072562a249c454b73fa293b2999f344e26363ead9a",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "92:e2:72:cf:92:bd",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-912564",
	                        "d1131cb70c56"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
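The inspect output above shows every exposed node port (22/SSH, 2376, 5000, 8443, 32443) published on 127.0.0.1 with a dynamically allocated host port. Any single field can be pulled back out with a Go-template query; for example, the host port mapped to the node's SSH port (33433 in this run):

	docker container inspect embed-certs-912564 \
	  --format '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'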
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-912564 -n embed-certs-912564
helpers_test.go:253: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-912564 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-912564 logs -n 25: (1.056799518s)
helpers_test.go:261: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p running-upgrade-561421 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                                                                                                                          │ running-upgrade-561421       │ jenkins │ v1.35.0 │ 27 Dec 25 09:33 UTC │ 27 Dec 25 09:33 UTC │
	│ start   │ -p kubernetes-upgrade-761172 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                             │ kubernetes-upgrade-761172    │ jenkins │ v1.37.0 │ 27 Dec 25 09:33 UTC │                     │
	│ start   │ -p kubernetes-upgrade-761172 --memory=3072 --kubernetes-version=v1.35.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-761172    │ jenkins │ v1.37.0 │ 27 Dec 25 09:33 UTC │ 27 Dec 25 09:33 UTC │
	│ delete  │ -p kubernetes-upgrade-761172                                                                                                                                                                                                                  │ kubernetes-upgrade-761172    │ jenkins │ v1.37.0 │ 27 Dec 25 09:33 UTC │ 27 Dec 25 09:33 UTC │
	│ start   │ -p test-preload-805186 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio                                                                                                                  │ test-preload-805186          │ jenkins │ v1.37.0 │ 27 Dec 25 09:33 UTC │ 27 Dec 25 09:34 UTC │
	│ start   │ -p running-upgrade-561421 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                                                      │ running-upgrade-561421       │ jenkins │ v1.37.0 │ 27 Dec 25 09:33 UTC │ 27 Dec 25 09:34 UTC │
	│ delete  │ -p running-upgrade-561421                                                                                                                                                                                                                     │ running-upgrade-561421       │ jenkins │ v1.37.0 │ 27 Dec 25 09:34 UTC │ 27 Dec 25 09:34 UTC │
	│ start   │ -p old-k8s-version-094398 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-094398       │ jenkins │ v1.37.0 │ 27 Dec 25 09:34 UTC │ 27 Dec 25 09:35 UTC │
	│ image   │ test-preload-805186 image pull ghcr.io/medyagh/image-mirrors/busybox:latest                                                                                                                                                                   │ test-preload-805186          │ jenkins │ v1.37.0 │ 27 Dec 25 09:34 UTC │ 27 Dec 25 09:34 UTC │
	│ stop    │ -p test-preload-805186                                                                                                                                                                                                                        │ test-preload-805186          │ jenkins │ v1.37.0 │ 27 Dec 25 09:34 UTC │ 27 Dec 25 09:34 UTC │
	│ start   │ -p test-preload-805186 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio                                                                                                                            │ test-preload-805186          │ jenkins │ v1.37.0 │ 27 Dec 25 09:34 UTC │ 27 Dec 25 09:35 UTC │
	│ start   │ -p cert-expiration-237269 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-237269       │ jenkins │ v1.37.0 │ 27 Dec 25 09:35 UTC │ 27 Dec 25 09:35 UTC │
	│ delete  │ -p cert-expiration-237269                                                                                                                                                                                                                     │ cert-expiration-237269       │ jenkins │ v1.37.0 │ 27 Dec 25 09:35 UTC │                     │
	│ start   │ -p embed-certs-912564 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                        │ embed-certs-912564           │ jenkins │ v1.37.0 │ 27 Dec 25 09:35 UTC │ 27 Dec 25 09:35 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-094398 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-094398       │ jenkins │ v1.37.0 │ 27 Dec 25 09:35 UTC │                     │
	│ stop    │ -p old-k8s-version-094398 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-094398       │ jenkins │ v1.37.0 │ 27 Dec 25 09:35 UTC │ 27 Dec 25 09:35 UTC │
	│ image   │ test-preload-805186 image list                                                                                                                                                                                                                │ test-preload-805186          │ jenkins │ v1.37.0 │ 27 Dec 25 09:35 UTC │ 27 Dec 25 09:35 UTC │
	│ delete  │ -p test-preload-805186                                                                                                                                                                                                                        │ test-preload-805186          │ jenkins │ v1.37.0 │ 27 Dec 25 09:35 UTC │ 27 Dec 25 09:35 UTC │
	│ delete  │ -p disable-driver-mounts-917808                                                                                                                                                                                                               │ disable-driver-mounts-917808 │ jenkins │ v1.37.0 │ 27 Dec 25 09:35 UTC │ 27 Dec 25 09:35 UTC │
	│ start   │ -p no-preload-963457 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                       │ no-preload-963457            │ jenkins │ v1.37.0 │ 27 Dec 25 09:35 UTC │                     │
	│ addons  │ enable dashboard -p old-k8s-version-094398 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-094398       │ jenkins │ v1.37.0 │ 27 Dec 25 09:35 UTC │ 27 Dec 25 09:35 UTC │
	│ start   │ -p old-k8s-version-094398 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-094398       │ jenkins │ v1.37.0 │ 27 Dec 25 09:35 UTC │                     │
	│ delete  │ -p stopped-upgrade-196124                                                                                                                                                                                                                     │ stopped-upgrade-196124       │ jenkins │ v1.37.0 │ 27 Dec 25 09:35 UTC │ 27 Dec 25 09:35 UTC │
	│ start   │ -p default-k8s-diff-port-497722 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-497722 │ jenkins │ v1.37.0 │ 27 Dec 25 09:35 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-912564 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-912564           │ jenkins │ v1.37.0 │ 27 Dec 25 09:35 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/27 09:35:46
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1227 09:35:46.755689  616179 out.go:360] Setting OutFile to fd 1 ...
	I1227 09:35:46.756480  616179 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:35:46.756497  616179 out.go:374] Setting ErrFile to fd 2...
	I1227 09:35:46.756508  616179 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:35:46.756885  616179 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22343-373581/.minikube/bin
	I1227 09:35:46.757494  616179 out.go:368] Setting JSON to false
	I1227 09:35:46.758988  616179 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4691,"bootTime":1766823456,"procs":335,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1227 09:35:46.759063  616179 start.go:143] virtualization: kvm guest
	I1227 09:35:46.760205  616179 out.go:179] * [default-k8s-diff-port-497722] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1227 09:35:46.761139  616179 out.go:179]   - MINIKUBE_LOCATION=22343
	I1227 09:35:46.761174  616179 notify.go:221] Checking for updates...
	I1227 09:35:46.763256  616179 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 09:35:46.767222  616179 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22343-373581/kubeconfig
	I1227 09:35:46.768207  616179 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22343-373581/.minikube
	I1227 09:35:46.770393  616179 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1227 09:35:46.771781  616179 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1227 09:35:45.499778  613189 cli_runner.go:164] Run: docker network inspect old-k8s-version-094398 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 09:35:45.522612  613189 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1227 09:35:45.528031  613189 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 09:35:45.541662  613189 kubeadm.go:884] updating cluster {Name:old-k8s-version-094398 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-094398 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1227 09:35:45.541812  613189 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1227 09:35:45.541880  613189 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 09:35:45.579386  613189 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 09:35:45.579412  613189 crio.go:433] Images already preloaded, skipping extraction
	I1227 09:35:45.579471  613189 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 09:35:45.608094  613189 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 09:35:45.608123  613189 cache_images.go:86] Images are preloaded, skipping loading
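
Both crictl runs above return the same JSON that minikube diffs against its expected image list; the "all images are preloaded" conclusion can be reproduced by hand with a sketch like this (assuming jq is available on the node):

	# list the image tags cri-o already knows about
	sudo crictl images --output json | jq -r '.images[].repoTags[]'
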
	I1227 09:35:45.608132  613189 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.28.0 crio true true} ...
	I1227 09:35:45.608266  613189 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-094398 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-094398 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1227 09:35:45.608360  613189 ssh_runner.go:195] Run: crio config
	I1227 09:35:45.657007  613189 cni.go:84] Creating CNI manager for ""
	I1227 09:35:45.657033  613189 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 09:35:45.657057  613189 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1227 09:35:45.657082  613189 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-094398 NodeName:old-k8s-version-094398 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1227 09:35:45.657211  613189 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-094398"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1227 09:35:45.657273  613189 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1227 09:35:45.666851  613189 binaries.go:51] Found k8s binaries, skipping transfer
	I1227 09:35:45.666917  613189 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1227 09:35:45.675091  613189 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1227 09:35:45.688843  613189 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1227 09:35:45.702073  613189 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
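
With the config written to /var/tmp/minikube/kubeadm.yaml.new, it can be sanity-checked before kubeadm consumes it; recent kubeadm releases ship a validate subcommand (shown as a sketch, assuming this v1.28.0 build includes it):

	# check the generated config for errors without applying it
	sudo /var/lib/minikube/binaries/v1.28.0/kubeadm config validate \
	    --config /var/tmp/minikube/kubeadm.yaml.new
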
	I1227 09:35:45.714646  613189 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1227 09:35:45.718025  613189 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 09:35:45.727603  613189 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 09:35:45.816309  613189 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 09:35:45.844407  613189 certs.go:69] Setting up /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/old-k8s-version-094398 for IP: 192.168.76.2
	I1227 09:35:45.844433  613189 certs.go:195] generating shared ca certs ...
	I1227 09:35:45.844454  613189 certs.go:227] acquiring lock for ca certs: {Name:mkab52f7e7387ca5a3a8b54d27e85c3698ea95d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:35:45.844632  613189 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22343-373581/.minikube/ca.key
	I1227 09:35:45.844696  613189 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22343-373581/.minikube/proxy-client-ca.key
	I1227 09:35:45.844713  613189 certs.go:257] generating profile certs ...
	I1227 09:35:45.844879  613189 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/old-k8s-version-094398/client.key
	I1227 09:35:45.844964  613189 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/old-k8s-version-094398/apiserver.key.a775617b
	I1227 09:35:45.845046  613189 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/old-k8s-version-094398/proxy-client.key
	I1227 09:35:45.845196  613189 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/377171.pem (1338 bytes)
	W1227 09:35:45.845254  613189 certs.go:480] ignoring /home/jenkins/minikube-integration/22343-373581/.minikube/certs/377171_empty.pem, impossibly tiny 0 bytes
	I1227 09:35:45.845269  613189 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca-key.pem (1679 bytes)
	I1227 09:35:45.845314  613189 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca.pem (1082 bytes)
	I1227 09:35:45.845353  613189 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/cert.pem (1123 bytes)
	I1227 09:35:45.845388  613189 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/key.pem (1679 bytes)
	I1227 09:35:45.845451  613189 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-373581/.minikube/files/etc/ssl/certs/3771712.pem (1708 bytes)
	I1227 09:35:45.846517  613189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1227 09:35:45.873815  613189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1227 09:35:45.896197  613189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1227 09:35:45.917513  613189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1227 09:35:45.943363  613189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/old-k8s-version-094398/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1227 09:35:45.964765  613189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/old-k8s-version-094398/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1227 09:35:46.022831  613189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/old-k8s-version-094398/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1227 09:35:46.042871  613189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/old-k8s-version-094398/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1227 09:35:46.062484  613189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/files/etc/ssl/certs/3771712.pem --> /usr/share/ca-certificates/3771712.pem (1708 bytes)
	I1227 09:35:46.081104  613189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1227 09:35:46.097994  613189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/certs/377171.pem --> /usr/share/ca-certificates/377171.pem (1338 bytes)
	I1227 09:35:46.115533  613189 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1227 09:35:46.127987  613189 ssh_runner.go:195] Run: openssl version
	I1227 09:35:46.134213  613189 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3771712.pem
	I1227 09:35:46.142109  613189 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3771712.pem /etc/ssl/certs/3771712.pem
	I1227 09:35:46.149271  613189 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3771712.pem
	I1227 09:35:46.153035  613189 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 27 09:09 /usr/share/ca-certificates/3771712.pem
	I1227 09:35:46.153085  613189 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3771712.pem
	I1227 09:35:46.192311  613189 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1227 09:35:46.200046  613189 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:35:46.207661  613189 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1227 09:35:46.215364  613189 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:35:46.219273  613189 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 09:06 /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:35:46.219326  613189 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:35:46.257259  613189 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1227 09:35:46.265448  613189 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/377171.pem
	I1227 09:35:46.273347  613189 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/377171.pem /etc/ssl/certs/377171.pem
	I1227 09:35:46.281456  613189 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/377171.pem
	I1227 09:35:46.285182  613189 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 27 09:09 /usr/share/ca-certificates/377171.pem
	I1227 09:35:46.285236  613189 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/377171.pem
	I1227 09:35:46.320328  613189 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
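
The hash-and-symlink loop above implements OpenSSL's CA directory convention: a certificate is trusted when a link named <subject-hash>.0 points at it under /etc/ssl/certs. For the minikubeCA case logged above, the check expands to:

	CERT=/usr/share/ca-certificates/minikubeCA.pem
	openssl x509 -hash -noout -in "$CERT"        # prints the subject hash, b5213941 here
	sudo test -L /etc/ssl/certs/b5213941.0 && echo "hash link present"
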
	I1227 09:35:46.327961  613189 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1227 09:35:46.332395  613189 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1227 09:35:46.377566  613189 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1227 09:35:46.439983  613189 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1227 09:35:46.502031  613189 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1227 09:35:46.570314  613189 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1227 09:35:46.634767  613189 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
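
Each -checkend 86400 run asks OpenSSL whether the certificate will still be valid 86400 seconds (one day) from now: exit status 0 means yes, 1 means it expires inside the window. A standalone version of one probe:

	if openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400; then
	    echo "etcd peer cert valid for at least 24h"
	else
	    echo "etcd peer cert expires within 24h"
	fi
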
	I1227 09:35:46.696637  613189 kubeadm.go:401] StartCluster: {Name:old-k8s-version-094398 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-094398 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 09:35:46.696826  613189 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 09:35:46.696915  613189 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 09:35:46.744039  613189 cri.go:96] found id: "c5fc71eb798e5068dea2558184fc2f1324dfde7c0fb1d8eb63ec2e35afe24f87"
	I1227 09:35:46.744111  613189 cri.go:96] found id: "e5693fba043840c8a3d1117c5220be21e1cfd4a801e563c4512c5828ce4adbcd"
	I1227 09:35:46.744190  613189 cri.go:96] found id: "8223e42bf97a022ceca335fd381b6b6c3aeac7c607125c2e1cbf3b803876c7ad"
	I1227 09:35:46.744216  613189 cri.go:96] found id: "9f6ac155f6a42bf5c5966c64219e0b13256e12d474b8bdc4feb2e3846eeca31d"
	I1227 09:35:46.744236  613189 cri.go:96] found id: ""
	I1227 09:35:46.744311  613189 ssh_runner.go:195] Run: sudo runc list -f json
	W1227 09:35:46.767509  613189 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T09:35:46Z" level=error msg="open /run/runc: no such file or directory"
	I1227 09:35:46.767575  613189 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1227 09:35:46.791342  613189 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1227 09:35:46.791423  613189 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1227 09:35:46.791499  613189 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1227 09:35:46.803343  613189 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1227 09:35:46.805446  613189 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-094398" does not appear in /home/jenkins/minikube-integration/22343-373581/kubeconfig
	I1227 09:35:46.806319  613189 kubeconfig.go:62] /home/jenkins/minikube-integration/22343-373581/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-094398" cluster setting kubeconfig missing "old-k8s-version-094398" context setting]
	I1227 09:35:46.807502  613189 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-373581/kubeconfig: {Name:mkb411689a0952d3f8325ae3ededdb7352d2980c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
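
The repair amounts to re-adding the missing cluster and context stanzas. Done by hand with kubectl it would look roughly like this (a sketch only: the endpoint and CA path are the values recorded for this profile, not the exact code path minikube takes):

	KC=/home/jenkins/minikube-integration/22343-373581/kubeconfig
	kubectl --kubeconfig "$KC" config set-cluster old-k8s-version-094398 \
	    --server=https://192.168.76.2:8443 \
	    --certificate-authority=/home/jenkins/minikube-integration/22343-373581/.minikube/ca.crt
	kubectl --kubeconfig "$KC" config set-context old-k8s-version-094398 \
	    --cluster=old-k8s-version-094398 --user=old-k8s-version-094398
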
	I1227 09:35:46.809992  613189 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1227 09:35:46.821184  613189 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1227 09:35:46.821221  613189 kubeadm.go:602] duration metric: took 29.777966ms to restartPrimaryControlPlane
	I1227 09:35:46.821234  613189 kubeadm.go:403] duration metric: took 124.606917ms to StartCluster
	I1227 09:35:46.821256  613189 settings.go:142] acquiring lock: {Name:mkc867fd794159ff1847b43b60548e454c403aa9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:35:46.821320  613189 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22343-373581/kubeconfig
	I1227 09:35:46.823283  613189 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-373581/kubeconfig: {Name:mkb411689a0952d3f8325ae3ededdb7352d2980c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:35:46.823714  613189 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1227 09:35:46.824018  613189 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1227 09:35:46.824047  613189 config.go:182] Loaded profile config "old-k8s-version-094398": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1227 09:35:46.824156  613189 addons.go:70] Setting dashboard=true in profile "old-k8s-version-094398"
	I1227 09:35:46.824171  613189 addons.go:239] Setting addon dashboard=true in "old-k8s-version-094398"
	W1227 09:35:46.824180  613189 addons.go:248] addon dashboard should already be in state true
	I1227 09:35:46.824197  613189 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-094398"
	I1227 09:35:46.824215  613189 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-094398"
	I1227 09:35:46.824145  613189 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-094398"
	I1227 09:35:46.824216  613189 host.go:66] Checking if "old-k8s-version-094398" exists ...
	I1227 09:35:46.824458  613189 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-094398"
	W1227 09:35:46.824468  613189 addons.go:248] addon storage-provisioner should already be in state true
	I1227 09:35:46.824502  613189 host.go:66] Checking if "old-k8s-version-094398" exists ...
	I1227 09:35:46.824963  613189 cli_runner.go:164] Run: docker container inspect old-k8s-version-094398 --format={{.State.Status}}
	I1227 09:35:46.825098  613189 cli_runner.go:164] Run: docker container inspect old-k8s-version-094398 --format={{.State.Status}}
	I1227 09:35:46.825259  613189 cli_runner.go:164] Run: docker container inspect old-k8s-version-094398 --format={{.State.Status}}
	I1227 09:35:46.829203  613189 out.go:179] * Verifying Kubernetes components...
	I1227 09:35:46.830384  613189 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 09:35:46.881620  613189 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-094398"
	W1227 09:35:46.881654  613189 addons.go:248] addon default-storageclass should already be in state true
	I1227 09:35:46.881689  613189 host.go:66] Checking if "old-k8s-version-094398" exists ...
	I1227 09:35:46.882235  613189 cli_runner.go:164] Run: docker container inspect old-k8s-version-094398 --format={{.State.Status}}
	I1227 09:35:46.889077  613189 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1227 09:35:46.889281  613189 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1227 09:35:46.890560  613189 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1227 09:35:46.773371  616179 config.go:182] Loaded profile config "embed-certs-912564": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:35:46.773524  616179 config.go:182] Loaded profile config "no-preload-963457": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:35:46.773640  616179 config.go:182] Loaded profile config "old-k8s-version-094398": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1227 09:35:46.773744  616179 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 09:35:46.816026  616179 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1227 09:35:46.816136  616179 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 09:35:46.956365  616179 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:74 OomKillDisable:false NGoroutines:85 SystemTime:2025-12-27 09:35:46.933090412 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1227 09:35:46.957451  616179 docker.go:319] overlay module found
	I1227 09:35:46.960591  616179 out.go:179] * Using the docker driver based on user configuration
	I1227 09:35:46.961659  616179 start.go:309] selected driver: docker
	I1227 09:35:46.961823  616179 start.go:928] validating driver "docker" against <nil>
	I1227 09:35:46.961975  616179 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 09:35:46.963318  616179 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 09:35:47.069898  616179 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:73 OomKillDisable:false NGoroutines:84 SystemTime:2025-12-27 09:35:47.055389574 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1227 09:35:47.070313  616179 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1227 09:35:47.070786  616179 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 09:35:47.072197  616179 out.go:179] * Using Docker driver with root privileges
	I1227 09:35:47.073088  616179 cni.go:84] Creating CNI manager for ""
	I1227 09:35:47.073184  616179 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 09:35:47.073195  616179 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I1227 09:35:47.073271  616179 start.go:353] cluster config:
	{Name:default-k8s-diff-port-497722 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-497722 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 09:35:47.074516  616179 out.go:179] * Starting "default-k8s-diff-port-497722" primary control-plane node in "default-k8s-diff-port-497722" cluster
	I1227 09:35:47.075583  616179 cache.go:134] Beginning downloading kic base image for docker with crio
	I1227 09:35:47.076449  616179 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
	I1227 09:35:47.077208  616179 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 09:35:47.077239  616179 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22343-373581/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4
	I1227 09:35:47.077262  616179 cache.go:65] Caching tarball of preloaded images
	I1227 09:35:47.077316  616179 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1227 09:35:47.077356  616179 preload.go:251] Found /home/jenkins/minikube-integration/22343-373581/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1227 09:35:47.077367  616179 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I1227 09:35:47.077492  616179 profile.go:143] Saving config to /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/default-k8s-diff-port-497722/config.json ...
	I1227 09:35:47.077516  616179 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/default-k8s-diff-port-497722/config.json: {Name:mk3f614642367998924f39c50832497a4ff490dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:35:47.113646  616179 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon, skipping pull
	I1227 09:35:47.113719  616179 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in daemon, skipping load
	I1227 09:35:47.113740  616179 cache.go:243] Successfully downloaded all kic artifacts
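
The "Found ... in local docker daemon, skipping pull" decision above is a plain image lookup; it can be reproduced with docker image inspect, which exits non-zero when the reference is absent (tag shown without the digest pin for brevity):

	IMG=gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316
	docker image inspect "$IMG" >/dev/null 2>&1 && echo "base image already in daemon"
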
	I1227 09:35:47.113831  616179 start.go:360] acquireMachinesLock for default-k8s-diff-port-497722: {Name:mk952cc47ec82ed9310014186e6e4270fbb3e58d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 09:35:47.113983  616179 start.go:364] duration metric: took 122.345µs to acquireMachinesLock for "default-k8s-diff-port-497722"
	I1227 09:35:47.114011  616179 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-497722 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-497722 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1227 09:35:47.114083  616179 start.go:125] createHost starting for "" (driver="docker")
	I1227 09:35:46.890712  613189 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1227 09:35:46.890753  613189 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1227 09:35:46.890939  613189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-094398
	I1227 09:35:46.894509  613189 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1227 09:35:46.894530  613189 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1227 09:35:46.894587  613189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-094398
	I1227 09:35:46.926879  613189 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1227 09:35:46.926906  613189 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1227 09:35:46.926968  613189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-094398
	I1227 09:35:46.933586  613189 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/old-k8s-version-094398/id_rsa Username:docker}
	I1227 09:35:46.953559  613189 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/old-k8s-version-094398/id_rsa Username:docker}
	I1227 09:35:46.965923  613189 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/old-k8s-version-094398/id_rsa Username:docker}
	I1227 09:35:47.052815  613189 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 09:35:47.082016  613189 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-094398" to be "Ready" ...
	I1227 09:35:47.113992  613189 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1227 09:35:47.114014  613189 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1227 09:35:47.126003  613189 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1227 09:35:47.136835  613189 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1227 09:35:47.136858  613189 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1227 09:35:47.147609  613189 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1227 09:35:47.159602  613189 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1227 09:35:47.159627  613189 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1227 09:35:47.179433  613189 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1227 09:35:47.179492  613189 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1227 09:35:47.198881  613189 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1227 09:35:47.198906  613189 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1227 09:35:47.222743  613189 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1227 09:35:47.222771  613189 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1227 09:35:47.251359  613189 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1227 09:35:47.251387  613189 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1227 09:35:47.267205  613189 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1227 09:35:47.267232  613189 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1227 09:35:47.283873  613189 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1227 09:35:47.283894  613189 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1227 09:35:47.301059  613189 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1227 09:35:48.910360  613189 node_ready.go:49] node "old-k8s-version-094398" is "Ready"
	I1227 09:35:48.910400  613189 node_ready.go:38] duration metric: took 1.828311822s for node "old-k8s-version-094398" to be "Ready" ...
	I1227 09:35:48.910420  613189 api_server.go:52] waiting for apiserver process to appear ...
	I1227 09:35:48.910486  613189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 09:35:49.822616  613189 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.696571117s)
	I1227 09:35:49.822702  613189 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.675015884s)
	I1227 09:35:50.214102  613189 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.91299318s)
	I1227 09:35:50.214148  613189 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.303635904s)
	I1227 09:35:50.214181  613189 api_server.go:72] duration metric: took 3.390083983s to wait for apiserver process to appear ...
	I1227 09:35:50.214192  613189 api_server.go:88] waiting for apiserver healthz status ...
	I1227 09:35:50.214259  613189 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1227 09:35:50.215330  613189 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-094398 addons enable metrics-server
	
	I1227 09:35:50.216289  613189 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1227 09:35:45.592534  610436 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.35.0: (1.268806896s)
	I1227 09:35:45.592563  610436 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22343-373581/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0 from cache
	I1227 09:35:45.592592  610436 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.35.0
	I1227 09:35:45.592647  610436 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.35.0
	I1227 09:35:47.791892  610436 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.35.0: (2.199210814s)
	I1227 09:35:47.791937  610436 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22343-373581/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0 from cache
	I1227 09:35:47.791982  610436 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1227 09:35:47.792048  610436 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1227 09:35:48.493740  610436 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22343-373581/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1227 09:35:48.493814  610436 cache_images.go:125] Successfully loaded all cached images
	I1227 09:35:48.493826  610436 cache_images.go:94] duration metric: took 10.851306524s to LoadCachedImages
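
On this no-preload start each cached image archive is copied to the node and loaded individually; every "Loading image" / "Transferred and loaded" pair above corresponds to a step like:

	# load one cached image archive into the node's container storage
	sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.35.0
	# confirm the CRI now sees it
	sudo crictl images | grep kube-apiserver
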
	I1227 09:35:48.493847  610436 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0 crio true true} ...
	I1227 09:35:48.493977  610436 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-963457 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:no-preload-963457 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1227 09:35:48.494078  610436 ssh_runner.go:195] Run: crio config
	I1227 09:35:48.563184  610436 cni.go:84] Creating CNI manager for ""
	I1227 09:35:48.563209  610436 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 09:35:48.563226  610436 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1227 09:35:48.563257  610436 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-963457 NodeName:no-preload-963457 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1227 09:35:48.563454  610436 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-963457"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1227 09:35:48.563533  610436 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1227 09:35:48.573047  610436 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.35.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.35.0': No such file or directory
	
	Initiating transfer...
	I1227 09:35:48.573117  610436 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.35.0
	I1227 09:35:48.581696  610436 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0/bin/linux/amd64/kubectl.sha256
	I1227 09:35:48.581724  610436 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0/bin/linux/amd64/kubeadm.sha256
	I1227 09:35:48.581698  610436 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.35.0/bin/linux/amd64/kubelet.sha256
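
The ?checksum=file:...sha256 suffix is minikube's downloader syntax (go-getter style): fetch the binary, then verify it against the published digest file, which contains just the hex digest. The equivalent manual verification for kubelet:

	curl -fsSLO https://dl.k8s.io/release/v1.35.0/bin/linux/amd64/kubelet
	# the .sha256 file holds only the digest; sha256sum expects "digest  filename"
	echo "$(curl -fsSL https://dl.k8s.io/release/v1.35.0/bin/linux/amd64/kubelet.sha256)  kubelet" \
	    | sha256sum --check
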
	I1227 09:35:48.581839  610436 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0/kubeadm
	I1227 09:35:48.581876  610436 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 09:35:48.581815  610436 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0/kubectl
	I1227 09:35:48.587751  610436 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0/kubeadm': No such file or directory
	I1227 09:35:48.587776  610436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/cache/linux/amd64/v1.35.0/kubeadm --> /var/lib/minikube/binaries/v1.35.0/kubeadm (72368312 bytes)
	I1227 09:35:48.603082  610436 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0/kubectl': No such file or directory
	I1227 09:35:48.603109  610436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/cache/linux/amd64/v1.35.0/kubectl --> /var/lib/minikube/binaries/v1.35.0/kubectl (58597560 bytes)
	I1227 09:35:48.603116  610436 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0/kubelet
	I1227 09:35:48.616984  610436 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0/kubelet': No such file or directory
	I1227 09:35:48.617024  610436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/cache/linux/amd64/v1.35.0/kubelet --> /var/lib/minikube/binaries/v1.35.0/kubelet (58110244 bytes)
	I1227 09:35:49.175587  610436 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1227 09:35:49.185007  610436 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1227 09:35:49.200734  610436 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1227 09:35:49.228982  610436 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
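
Note the schema change relative to the v1beta3 config generated for the v1.28.0 cluster earlier in this log: kubeadm.k8s.io/v1beta4 replaces the map-style extraArgs with a list of name/value pairs, which is why both shapes appear in this report. An older config can be converted mechanically rather than hand-edited (output path here is illustrative):

	# rewrite an old config into the newest API version this kubeadm supports
	sudo /var/lib/minikube/binaries/v1.35.0/kubeadm config migrate \
	    --old-config /var/tmp/minikube/kubeadm.yaml \
	    --new-config /var/tmp/minikube/kubeadm-v1beta4.yaml
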
	I1227 09:35:49.244382  610436 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1227 09:35:49.249999  610436 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 09:35:49.265296  610436 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 09:35:49.397010  610436 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 09:35:49.426716  610436 certs.go:69] Setting up /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/no-preload-963457 for IP: 192.168.85.2
	I1227 09:35:49.426816  610436 certs.go:195] generating shared ca certs ...
	I1227 09:35:49.426850  610436 certs.go:227] acquiring lock for ca certs: {Name:mkab52f7e7387ca5a3a8b54d27e85c3698ea95d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:35:49.427052  610436 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22343-373581/.minikube/ca.key
	I1227 09:35:49.427132  610436 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22343-373581/.minikube/proxy-client-ca.key
	I1227 09:35:49.427148  610436 certs.go:257] generating profile certs ...
	I1227 09:35:49.427230  610436 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/no-preload-963457/client.key
	I1227 09:35:49.427260  610436 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/no-preload-963457/client.crt with IP's: []
	I1227 09:35:49.537370  610436 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/no-preload-963457/client.crt ...
	I1227 09:35:49.537412  610436 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/no-preload-963457/client.crt: {Name:mkdbde4cd969035c55c4715dd94fc928e132b31c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:35:49.537587  610436 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/no-preload-963457/client.key ...
	I1227 09:35:49.537607  610436 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/no-preload-963457/client.key: {Name:mk16dd0772d294c8576b2b539e762469107e8b03 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:35:49.537711  610436 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/no-preload-963457/apiserver.key.7eac886d
	I1227 09:35:49.537736  610436 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/no-preload-963457/apiserver.crt.7eac886d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1227 09:35:49.657990  610436 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/no-preload-963457/apiserver.crt.7eac886d ...
	I1227 09:35:49.658041  610436 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/no-preload-963457/apiserver.crt.7eac886d: {Name:mkb277a0c9634672efc8db6e3345ed07f91729d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:35:49.658309  610436 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/no-preload-963457/apiserver.key.7eac886d ...
	I1227 09:35:49.658339  610436 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/no-preload-963457/apiserver.key.7eac886d: {Name:mk6bcb6a2440d29f752a09e7d2bf11c0cc97e4c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:35:49.658459  610436 certs.go:382] copying /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/no-preload-963457/apiserver.crt.7eac886d -> /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/no-preload-963457/apiserver.crt
	I1227 09:35:49.658559  610436 certs.go:386] copying /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/no-preload-963457/apiserver.key.7eac886d -> /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/no-preload-963457/apiserver.key
	I1227 09:35:49.658636  610436 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/no-preload-963457/proxy-client.key
	I1227 09:35:49.658659  610436 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/no-preload-963457/proxy-client.crt with IP's: []
	I1227 09:35:49.769115  610436 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/no-preload-963457/proxy-client.crt ...
	I1227 09:35:49.769154  610436 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/no-preload-963457/proxy-client.crt: {Name:mkada48b73843f982e1c00841d02331163007d43 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:35:49.769359  610436 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/no-preload-963457/proxy-client.key ...
	I1227 09:35:49.769388  610436 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/no-preload-963457/proxy-client.key: {Name:mkcb8bf6813f7743397453ca53ce4e2dfb1a12d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
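The apiserver certificate generated above carries IP SANs for the service VIP, loopback, and the node IP (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.85.2). A self-contained crypto/x509 sketch of issuing such a cert from a CA; key sizes, lifetimes, and names are illustrative, not minikube's crypto.go:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Throwaway CA so the example is self-contained.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// The IP SANs from the "Generating cert ... with IP's" line above.
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.85.2"),
		},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	fmt.Println("issued", len(srvDER), "DER bytes, err:", err)
}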
	I1227 09:35:49.769674  610436 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/377171.pem (1338 bytes)
	W1227 09:35:49.769736  610436 certs.go:480] ignoring /home/jenkins/minikube-integration/22343-373581/.minikube/certs/377171_empty.pem, impossibly tiny 0 bytes
	I1227 09:35:49.769754  610436 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca-key.pem (1679 bytes)
	I1227 09:35:49.769810  610436 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca.pem (1082 bytes)
	I1227 09:35:49.769853  610436 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/cert.pem (1123 bytes)
	I1227 09:35:49.769892  610436 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/key.pem (1679 bytes)
	I1227 09:35:49.769965  610436 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-373581/.minikube/files/etc/ssl/certs/3771712.pem (1708 bytes)
	I1227 09:35:49.770964  610436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1227 09:35:49.794720  610436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1227 09:35:49.816691  610436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1227 09:35:49.837945  610436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1227 09:35:49.860373  610436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/no-preload-963457/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1227 09:35:49.882969  610436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/no-preload-963457/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1227 09:35:49.905977  610436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/no-preload-963457/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1227 09:35:49.929221  610436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/no-preload-963457/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1227 09:35:49.951621  610436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1227 09:35:49.975132  610436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/certs/377171.pem --> /usr/share/ca-certificates/377171.pem (1338 bytes)
	I1227 09:35:49.999167  610436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/files/etc/ssl/certs/3771712.pem --> /usr/share/ca-certificates/3771712.pem (1708 bytes)
	I1227 09:35:50.025404  610436 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1227 09:35:50.043451  610436 ssh_runner.go:195] Run: openssl version
	I1227 09:35:50.053061  610436 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:35:50.063346  610436 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1227 09:35:50.076673  610436 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:35:50.081682  610436 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 09:06 /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:35:50.081754  610436 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:35:50.126389  610436 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1227 09:35:50.135485  610436 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1227 09:35:50.144898  610436 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/377171.pem
	I1227 09:35:50.154644  610436 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/377171.pem /etc/ssl/certs/377171.pem
	I1227 09:35:50.164642  610436 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/377171.pem
	I1227 09:35:50.170005  610436 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 27 09:09 /usr/share/ca-certificates/377171.pem
	I1227 09:35:50.170091  610436 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/377171.pem
	I1227 09:35:50.226782  610436 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1227 09:35:50.237171  610436 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/377171.pem /etc/ssl/certs/51391683.0
	I1227 09:35:50.246184  610436 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3771712.pem
	I1227 09:35:50.255406  610436 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3771712.pem /etc/ssl/certs/3771712.pem
	I1227 09:35:50.265786  610436 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3771712.pem
	I1227 09:35:50.270043  610436 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 27 09:09 /usr/share/ca-certificates/3771712.pem
	I1227 09:35:50.270086  610436 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3771712.pem
	I1227 09:35:50.308205  610436 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1227 09:35:50.316361  610436 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/3771712.pem /etc/ssl/certs/3ec20f2e.0
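The openssl/ln sequence above installs each CA into the hashed layout OpenSSL scans at verification time: "openssl x509 -hash -noout" prints the subject hash (b5213941 for minikubeCA), and <hash>.0 in /etc/ssl/certs is symlinked to the PEM. A sketch of those two steps in Go (the paths are illustrative):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash reproduces the openssl-hash plus ln -fs sequence
// from the log: compute the certificate's subject hash and symlink
// <hash>.0 in certsDir to the PEM file.
func linkBySubjectHash(pem, certsDir string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return "", err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := filepath.Join(certsDir, hash+".0")
	os.Remove(link) // emulate ln -fs: replace any existing link
	return link, os.Symlink(pem, link)
}

func main() {
	link, err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/tmp/certs")
	fmt.Println(link, err)
}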
	I1227 09:35:50.324341  610436 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1227 09:35:50.328090  610436 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1227 09:35:50.328151  610436 kubeadm.go:401] StartCluster: {Name:no-preload-963457 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:no-preload-963457 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 09:35:50.328249  610436 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 09:35:50.328305  610436 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 09:35:50.356695  610436 cri.go:96] found id: ""
	I1227 09:35:50.356821  610436 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1227 09:35:50.365169  610436 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1227 09:35:50.373345  610436 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1227 09:35:50.373412  610436 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1227 09:35:50.381051  610436 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1227 09:35:50.381073  610436 kubeadm.go:158] found existing configuration files:
	
	I1227 09:35:50.381118  610436 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1227 09:35:50.388831  610436 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1227 09:35:50.388884  610436 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1227 09:35:50.396384  610436 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1227 09:35:50.403872  610436 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1227 09:35:50.403951  610436 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1227 09:35:50.411680  610436 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1227 09:35:50.419420  610436 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1227 09:35:50.419482  610436 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1227 09:35:50.426725  610436 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1227 09:35:50.434353  610436 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1227 09:35:50.434412  610436 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1227 09:35:50.441832  610436 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1227 09:35:50.541545  610436 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1227 09:35:47.115594  616179 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1227 09:35:47.116406  616179 start.go:159] libmachine.API.Create for "default-k8s-diff-port-497722" (driver="docker")
	I1227 09:35:47.116451  616179 client.go:173] LocalClient.Create starting
	I1227 09:35:47.116545  616179 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca.pem
	I1227 09:35:47.116586  616179 main.go:144] libmachine: Decoding PEM data...
	I1227 09:35:47.116608  616179 main.go:144] libmachine: Parsing certificate...
	I1227 09:35:47.116697  616179 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22343-373581/.minikube/certs/cert.pem
	I1227 09:35:47.116731  616179 main.go:144] libmachine: Decoding PEM data...
	I1227 09:35:47.116745  616179 main.go:144] libmachine: Parsing certificate...
	I1227 09:35:47.117190  616179 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-497722 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1227 09:35:47.143624  616179 cli_runner.go:211] docker network inspect default-k8s-diff-port-497722 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1227 09:35:47.143768  616179 network_create.go:284] running [docker network inspect default-k8s-diff-port-497722] to gather additional debugging logs...
	I1227 09:35:47.144478  616179 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-497722
	W1227 09:35:47.170808  616179 cli_runner.go:211] docker network inspect default-k8s-diff-port-497722 returned with exit code 1
	I1227 09:35:47.170842  616179 network_create.go:287] error running [docker network inspect default-k8s-diff-port-497722]: docker network inspect default-k8s-diff-port-497722: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-497722 not found
	I1227 09:35:47.170858  616179 network_create.go:289] output of [docker network inspect default-k8s-diff-port-497722]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-497722 not found
	
	** /stderr **
	I1227 09:35:47.170979  616179 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 09:35:47.196622  616179 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-8c57ecff6d5e IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:1e:54:d8:ad:bd:73} reservation:<nil>}
	I1227 09:35:47.198054  616179 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-21a699476be6 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:fe:e8:d9:95:e6:36} reservation:<nil>}
	I1227 09:35:47.198881  616179 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-8e97c5356905 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:f2:d9:6b:42:f5:e3} reservation:<nil>}
	I1227 09:35:47.199874  616179 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-0ba531636d5b IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:6a:e4:d5:d9:cd:a6} reservation:<nil>}
	I1227 09:35:47.200832  616179 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-e27fce9ec482 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:26:8a:0d:7f:bc:3d} reservation:<nil>}
	I1227 09:35:47.201729  616179 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-a8636b7bd1bb IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:e2:59:64:97:08:c7} reservation:<nil>}
	I1227 09:35:47.203017  616179 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001dda1a0}
	I1227 09:35:47.203097  616179 network_create.go:124] attempt to create docker network default-k8s-diff-port-497722 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1227 09:35:47.203193  616179 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-497722 default-k8s-diff-port-497722
	I1227 09:35:47.272353  616179 network_create.go:108] docker network default-k8s-diff-port-497722 192.168.103.0/24 created
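The network.go lines above scan candidate private /24 subnets, skipping each one already claimed by a bridge interface, and create the docker network on the first free one. Judging from the log, the candidates step the third octet by 9 (49, 58, 67, ... 103); a toy Go sketch of that scan under that assumption:

package main

import "fmt"

// firstFreeSubnet walks the candidate 192.168.x.0/24 subnets in the
// order the log shows, stepping the third octet by 9, and returns the
// first one not already taken. "taken" stands in for the subnets the
// earlier "docker network inspect bridge" scan reported as in use.
func firstFreeSubnet(taken map[string]bool) string {
	for octet := 49; octet <= 255; octet += 9 {
		cidr := fmt.Sprintf("192.168.%d.0/24", octet)
		if !taken[cidr] {
			return cidr
		}
	}
	return ""
}

func main() {
	taken := map[string]bool{
		"192.168.49.0/24": true, "192.168.58.0/24": true,
		"192.168.67.0/24": true, "192.168.76.0/24": true,
		"192.168.85.0/24": true, "192.168.94.0/24": true,
	}
	fmt.Println(firstFreeSubnet(taken)) // 192.168.103.0/24, as in the log
}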
	I1227 09:35:47.272384  616179 kic.go:121] calculated static IP "192.168.103.2" for the "default-k8s-diff-port-497722" container
	I1227 09:35:47.272461  616179 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1227 09:35:47.297908  616179 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-497722 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-497722 --label created_by.minikube.sigs.k8s.io=true
	I1227 09:35:47.323868  616179 oci.go:103] Successfully created a docker volume default-k8s-diff-port-497722
	I1227 09:35:47.323970  616179 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-497722-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-497722 --entrypoint /usr/bin/test -v default-k8s-diff-port-497722:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -d /var/lib
	I1227 09:35:48.150576  616179 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-497722
	I1227 09:35:48.150667  616179 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 09:35:48.150686  616179 kic.go:194] Starting extracting preloaded images to volume ...
	I1227 09:35:48.150750  616179 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22343-373581/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-497722:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -I lz4 -xf /preloaded.tar -C /extractDir
	I1227 09:35:50.217148  613189 addons.go:530] duration metric: took 3.39313725s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1227 09:35:50.218982  613189 api_server.go:325] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1227 09:35:50.220348  613189 api_server.go:141] control plane version: v1.28.0
	I1227 09:35:50.220375  613189 api_server.go:131] duration metric: took 6.134642ms to wait for apiserver health ...
	I1227 09:35:50.220385  613189 system_pods.go:43] waiting for kube-system pods to appear ...
	I1227 09:35:50.223747  613189 system_pods.go:59] 8 kube-system pods found
	I1227 09:35:50.223785  613189 system_pods.go:61] "coredns-5dd5756b68-l2f7v" [f7c50be2-905f-4271-9c61-b65f6ae61096] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 09:35:50.223816  613189 system_pods.go:61] "etcd-old-k8s-version-094398" [ebeb97e8-4206-4f70-b630-d7ecc9f75950] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1227 09:35:50.223827  613189 system_pods.go:61] "kindnet-hb4bf" [f15b8136-82bb-46a5-942e-b7a9f2d21526] Running
	I1227 09:35:50.223843  613189 system_pods.go:61] "kube-apiserver-old-k8s-version-094398" [e3f4a020-e25c-4cfd-9d7e-747fcca4a5c1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1227 09:35:50.223855  613189 system_pods.go:61] "kube-controller-manager-old-k8s-version-094398" [567a6fe5-584e-4d6b-bc49-21b98b43bfb7] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1227 09:35:50.223864  613189 system_pods.go:61] "kube-proxy-w8h4h" [447cf446-97af-453a-852e-6e459a39939e] Running
	I1227 09:35:50.223873  613189 system_pods.go:61] "kube-scheduler-old-k8s-version-094398" [97e045a8-3831-4ed3-b195-f67c7fbb2043] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1227 09:35:50.223879  613189 system_pods.go:61] "storage-provisioner" [ed7a9a14-2a15-40f9-ac80-e99cc2704e98] Running
	I1227 09:35:50.223886  613189 system_pods.go:74] duration metric: took 3.495285ms to wait for pod list to return data ...
	I1227 09:35:50.223894  613189 default_sa.go:34] waiting for default service account to be created ...
	I1227 09:35:50.225986  613189 default_sa.go:45] found service account: "default"
	I1227 09:35:50.226004  613189 default_sa.go:55] duration metric: took 2.104679ms for default service account to be created ...
	I1227 09:35:50.226012  613189 system_pods.go:116] waiting for k8s-apps to be running ...
	I1227 09:35:50.229568  613189 system_pods.go:86] 8 kube-system pods found
	I1227 09:35:50.229596  613189 system_pods.go:89] "coredns-5dd5756b68-l2f7v" [f7c50be2-905f-4271-9c61-b65f6ae61096] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 09:35:50.229607  613189 system_pods.go:89] "etcd-old-k8s-version-094398" [ebeb97e8-4206-4f70-b630-d7ecc9f75950] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1227 09:35:50.229615  613189 system_pods.go:89] "kindnet-hb4bf" [f15b8136-82bb-46a5-942e-b7a9f2d21526] Running
	I1227 09:35:50.229624  613189 system_pods.go:89] "kube-apiserver-old-k8s-version-094398" [e3f4a020-e25c-4cfd-9d7e-747fcca4a5c1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1227 09:35:50.229633  613189 system_pods.go:89] "kube-controller-manager-old-k8s-version-094398" [567a6fe5-584e-4d6b-bc49-21b98b43bfb7] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1227 09:35:50.229644  613189 system_pods.go:89] "kube-proxy-w8h4h" [447cf446-97af-453a-852e-6e459a39939e] Running
	I1227 09:35:50.229653  613189 system_pods.go:89] "kube-scheduler-old-k8s-version-094398" [97e045a8-3831-4ed3-b195-f67c7fbb2043] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1227 09:35:50.229663  613189 system_pods.go:89] "storage-provisioner" [ed7a9a14-2a15-40f9-ac80-e99cc2704e98] Running
	I1227 09:35:50.229672  613189 system_pods.go:126] duration metric: took 3.653337ms to wait for k8s-apps to be running ...
	I1227 09:35:50.229684  613189 system_svc.go:44] waiting for kubelet service to be running ....
	I1227 09:35:50.229730  613189 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 09:35:50.243437  613189 system_svc.go:56] duration metric: took 13.747266ms WaitForService to wait for kubelet
	I1227 09:35:50.243465  613189 kubeadm.go:587] duration metric: took 3.41936711s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 09:35:50.243487  613189 node_conditions.go:102] verifying NodePressure condition ...
	I1227 09:35:50.246667  613189 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1227 09:35:50.246690  613189 node_conditions.go:123] node cpu capacity is 8
	I1227 09:35:50.246703  613189 node_conditions.go:105] duration metric: took 3.210875ms to run NodePressure ...
	I1227 09:35:50.246716  613189 start.go:242] waiting for startup goroutines ...
	I1227 09:35:50.246723  613189 start.go:247] waiting for cluster config update ...
	I1227 09:35:50.246732  613189 start.go:256] writing updated cluster config ...
	I1227 09:35:50.246996  613189 ssh_runner.go:195] Run: rm -f paused
	I1227 09:35:50.251365  613189 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1227 09:35:50.256782  613189 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-l2f7v" in "kube-system" namespace to be "Ready" or be gone ...
	W1227 09:35:52.261999  613189 pod_ready.go:104] pod "coredns-5dd5756b68-l2f7v" is not "Ready", error: <nil>
	I1227 09:35:50.605067  610436 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1227 09:35:53.070639  616179 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22343-373581/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-497722:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -I lz4 -xf /preloaded.tar -C /extractDir: (4.919830111s)
	I1227 09:35:53.070675  616179 kic.go:203] duration metric: took 4.91998444s to extract preloaded images to volume ...
	W1227 09:35:53.070749  616179 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1227 09:35:53.070782  616179 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1227 09:35:53.070852  616179 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1227 09:35:53.125205  616179 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-497722 --name default-k8s-diff-port-497722 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-497722 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-497722 --network default-k8s-diff-port-497722 --ip 192.168.103.2 --volume default-k8s-diff-port-497722:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a
	I1227 09:35:53.397599  616179 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-497722 --format={{.State.Running}}
	I1227 09:35:53.420602  616179 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-497722 --format={{.State.Status}}
	I1227 09:35:53.444542  616179 cli_runner.go:164] Run: docker exec default-k8s-diff-port-497722 stat /var/lib/dpkg/alternatives/iptables
	I1227 09:35:53.505229  616179 oci.go:144] the created container "default-k8s-diff-port-497722" has a running status.
	I1227 09:35:53.505270  616179 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22343-373581/.minikube/machines/default-k8s-diff-port-497722/id_rsa...
	I1227 09:35:53.618716  616179 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22343-373581/.minikube/machines/default-k8s-diff-port-497722/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1227 09:35:53.651332  616179 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-497722 --format={{.State.Status}}
	I1227 09:35:53.677082  616179 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1227 09:35:53.677107  616179 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-497722 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1227 09:35:53.744936  616179 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-497722 --format={{.State.Status}}
	I1227 09:35:53.770151  616179 machine.go:94] provisionDockerMachine start ...
	I1227 09:35:53.770248  616179 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-497722
	I1227 09:35:53.798410  616179 main.go:144] libmachine: Using SSH client type: native
	I1227 09:35:53.798785  616179 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33448 <nil> <nil>}
	I1227 09:35:53.798830  616179 main.go:144] libmachine: About to run SSH command:
	hostname
	I1227 09:35:53.937617  616179 main.go:144] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-497722
	
	I1227 09:35:53.937648  616179 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-497722"
	I1227 09:35:53.937702  616179 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-497722
	I1227 09:35:53.957023  616179 main.go:144] libmachine: Using SSH client type: native
	I1227 09:35:53.957308  616179 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33448 <nil> <nil>}
	I1227 09:35:53.957330  616179 main.go:144] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-497722 && echo "default-k8s-diff-port-497722" | sudo tee /etc/hostname
	I1227 09:35:54.094863  616179 main.go:144] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-497722
	
	I1227 09:35:54.094956  616179 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-497722
	I1227 09:35:54.114807  616179 main.go:144] libmachine: Using SSH client type: native
	I1227 09:35:54.115082  616179 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33448 <nil> <nil>}
	I1227 09:35:54.115123  616179 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-497722' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-497722/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-497722' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1227 09:35:54.240959  616179 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1227 09:35:54.240990  616179 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22343-373581/.minikube CaCertPath:/home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22343-373581/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22343-373581/.minikube}
	I1227 09:35:54.241019  616179 ubuntu.go:190] setting up certificates
	I1227 09:35:54.241033  616179 provision.go:84] configureAuth start
	I1227 09:35:54.241103  616179 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-497722
	I1227 09:35:54.261492  616179 provision.go:143] copyHostCerts
	I1227 09:35:54.261556  616179 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-373581/.minikube/ca.pem, removing ...
	I1227 09:35:54.261587  616179 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-373581/.minikube/ca.pem
	I1227 09:35:54.261673  616179 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22343-373581/.minikube/ca.pem (1082 bytes)
	I1227 09:35:54.261787  616179 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-373581/.minikube/cert.pem, removing ...
	I1227 09:35:54.261831  616179 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-373581/.minikube/cert.pem
	I1227 09:35:54.261878  616179 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22343-373581/.minikube/cert.pem (1123 bytes)
	I1227 09:35:54.261972  616179 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-373581/.minikube/key.pem, removing ...
	I1227 09:35:54.261986  616179 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-373581/.minikube/key.pem
	I1227 09:35:54.262026  616179 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22343-373581/.minikube/key.pem (1679 bytes)
	I1227 09:35:54.262098  616179 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22343-373581/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-497722 san=[127.0.0.1 192.168.103.2 default-k8s-diff-port-497722 localhost minikube]
	I1227 09:35:54.448512  616179 provision.go:177] copyRemoteCerts
	I1227 09:35:54.448581  616179 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1227 09:35:54.448629  616179 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-497722
	I1227 09:35:54.469441  616179 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/default-k8s-diff-port-497722/id_rsa Username:docker}
	I1227 09:35:54.565325  616179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1227 09:35:54.588938  616179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1227 09:35:54.609938  616179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1227 09:35:54.631540  616179 provision.go:87] duration metric: took 390.483875ms to configureAuth
	I1227 09:35:54.631569  616179 ubuntu.go:206] setting minikube options for container-runtime
	I1227 09:35:54.631750  616179 config.go:182] Loaded profile config "default-k8s-diff-port-497722": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:35:54.631901  616179 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-497722
	I1227 09:35:54.651179  616179 main.go:144] libmachine: Using SSH client type: native
	I1227 09:35:54.651557  616179 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33448 <nil> <nil>}
	I1227 09:35:54.651584  616179 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1227 09:35:54.921462  616179 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1227 09:35:54.921490  616179 machine.go:97] duration metric: took 1.151314885s to provisionDockerMachine
	I1227 09:35:54.921522  616179 client.go:176] duration metric: took 7.805045425s to LocalClient.Create
	I1227 09:35:54.921551  616179 start.go:167] duration metric: took 7.805146486s to libmachine.API.Create "default-k8s-diff-port-497722"
	I1227 09:35:54.921565  616179 start.go:293] postStartSetup for "default-k8s-diff-port-497722" (driver="docker")
	I1227 09:35:54.921578  616179 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1227 09:35:54.921657  616179 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1227 09:35:54.921719  616179 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-497722
	I1227 09:35:54.945906  616179 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/default-k8s-diff-port-497722/id_rsa Username:docker}
	I1227 09:35:55.042808  616179 ssh_runner.go:195] Run: cat /etc/os-release
	I1227 09:35:55.046535  616179 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1227 09:35:55.046570  616179 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1227 09:35:55.046584  616179 filesync.go:126] Scanning /home/jenkins/minikube-integration/22343-373581/.minikube/addons for local assets ...
	I1227 09:35:55.046644  616179 filesync.go:126] Scanning /home/jenkins/minikube-integration/22343-373581/.minikube/files for local assets ...
	I1227 09:35:55.046762  616179 filesync.go:149] local asset: /home/jenkins/minikube-integration/22343-373581/.minikube/files/etc/ssl/certs/3771712.pem -> 3771712.pem in /etc/ssl/certs
	I1227 09:35:55.046948  616179 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1227 09:35:55.054738  616179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/files/etc/ssl/certs/3771712.pem --> /etc/ssl/certs/3771712.pem (1708 bytes)
	I1227 09:35:55.080244  616179 start.go:296] duration metric: took 158.662938ms for postStartSetup
	I1227 09:35:55.080621  616179 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-497722
	I1227 09:35:55.102884  616179 profile.go:143] Saving config to /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/default-k8s-diff-port-497722/config.json ...
	I1227 09:35:55.103172  616179 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 09:35:55.103226  616179 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-497722
	I1227 09:35:55.144663  616179 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/default-k8s-diff-port-497722/id_rsa Username:docker}
	I1227 09:35:55.238237  616179 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1227 09:35:55.244395  616179 start.go:128] duration metric: took 8.130282182s to createHost
	I1227 09:35:55.244427  616179 start.go:83] releasing machines lock for "default-k8s-diff-port-497722", held for 8.130430933s
	I1227 09:35:55.244541  616179 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-497722
	I1227 09:35:55.267114  616179 ssh_runner.go:195] Run: cat /version.json
	I1227 09:35:55.267177  616179 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-497722
	I1227 09:35:55.267181  616179 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1227 09:35:55.267270  616179 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-497722
	I1227 09:35:55.288013  616179 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/default-k8s-diff-port-497722/id_rsa Username:docker}
	I1227 09:35:55.289529  616179 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/default-k8s-diff-port-497722/id_rsa Username:docker}
	I1227 09:35:55.379129  616179 ssh_runner.go:195] Run: systemctl --version
	I1227 09:35:55.436500  616179 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1227 09:35:55.471598  616179 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1227 09:35:55.476400  616179 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1227 09:35:55.476453  616179 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1227 09:35:55.499954  616179 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
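Bridge and podman CNI configs are renamed to *.mk_disabled above so that cri-o cannot pick one of them as the default pod network before the intended CNI is installed (the log later recommends kindnet for the docker driver with the crio runtime). A sketch of the rename pass in Go (the directory is a stand-in for /etc/cni/net.d):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disableBridgeCNI renames bridge/podman CNI configs to *.mk_disabled,
// like the find ... -exec mv {} {}.mk_disabled command in the log, so
// the runtime cannot select them as the default pod network.
func disableBridgeCNI(netd string) ([]string, error) {
	var disabled []string
	entries, err := os.ReadDir(netd)
	if err != nil {
		return nil, err
	}
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join(netd, name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				return disabled, err
			}
			disabled = append(disabled, src)
		}
	}
	return disabled, nil
}

func main() {
	fmt.Println(disableBridgeCNI("/tmp/cni-net.d"))
}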
	I1227 09:35:55.499975  616179 start.go:496] detecting cgroup driver to use...
	I1227 09:35:55.500002  616179 detect.go:190] detected "systemd" cgroup driver on host os
	I1227 09:35:55.500038  616179 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1227 09:35:55.515411  616179 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 09:35:55.526993  616179 docker.go:218] disabling cri-docker service (if available) ...
	I1227 09:35:55.527054  616179 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1227 09:35:55.543006  616179 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1227 09:35:55.559609  616179 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1227 09:35:55.640897  616179 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1227 09:35:55.732050  616179 docker.go:234] disabling docker service ...
	I1227 09:35:55.732133  616179 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1227 09:35:55.750645  616179 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1227 09:35:55.763825  616179 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1227 09:35:55.849242  616179 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1227 09:35:55.926742  616179 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1227 09:35:55.938684  616179 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 09:35:55.952099  616179 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1227 09:35:55.952161  616179 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:35:55.962873  616179 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1227 09:35:55.962939  616179 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:35:55.971926  616179 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:35:55.980086  616179 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:35:55.988523  616179 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1227 09:35:55.996132  616179 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:35:56.004442  616179 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:35:56.017411  616179 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:35:56.025757  616179 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1227 09:35:56.032722  616179 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1227 09:35:56.040041  616179 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 09:35:56.127055  616179 ssh_runner.go:195] Run: sudo systemctl restart crio
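The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf before the crio restart: the pause image is pinned, cgroup_manager is forced to systemd with conmon_cgroup = "pod", and default_sysctls gains net.ipv4.ip_unprivileged_port_start=0. A string-level Go sketch of the same edits (not minikube's actual code):

package main

import (
	"fmt"
	"regexp"
)

// rewriteCrioConf applies the edits the sed commands above make: pin
// the pause image, switch the cgroup manager to systemd with conmon
// in the "pod" cgroup, and allow unprivileged low ports.
func rewriteCrioConf(conf string) string {
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n?`).
		ReplaceAllString(conf, "") // drop any stale conmon_cgroup line first
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, "cgroup_manager = \"systemd\"\nconmon_cgroup = \"pod\"")
	if !regexp.MustCompile(`(?m)^ *default_sysctls`).MatchString(conf) {
		conf += "default_sysctls = [\n  \"net.ipv4.ip_unprivileged_port_start=0\",\n]\n"
	}
	return conf
}

func main() {
	in := "pause_image = \"old\"\ncgroup_manager = \"cgroupfs\"\n"
	fmt.Print(rewriteCrioConf(in))
}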
	I1227 09:35:56.290546  616179 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1227 09:35:56.290619  616179 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1227 09:35:56.295463  616179 start.go:574] Will wait 60s for crictl version
	I1227 09:35:56.295518  616179 ssh_runner.go:195] Run: which crictl
	I1227 09:35:56.300182  616179 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1227 09:35:56.336579  616179 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1227 09:35:56.336673  616179 ssh_runner.go:195] Run: crio --version
	I1227 09:35:56.374788  616179 ssh_runner.go:195] Run: crio --version
	I1227 09:35:56.411994  616179 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.34.3 ...
	I1227 09:35:56.413064  616179 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-497722 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 09:35:56.434875  616179 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1227 09:35:56.440214  616179 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 09:35:56.452386  616179 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-497722 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-497722 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1227 09:35:56.452541  616179 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 09:35:56.452606  616179 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 09:35:56.494670  616179 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 09:35:56.494693  616179 crio.go:433] Images already preloaded, skipping extraction
	I1227 09:35:56.494751  616179 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 09:35:56.524140  616179 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 09:35:56.524162  616179 cache_images.go:86] Images are preloaded, skipping loading
	I1227 09:35:56.524175  616179 kubeadm.go:935] updating node { 192.168.103.2 8444 v1.35.0 crio true true} ...
	I1227 09:35:56.524265  616179 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-497722 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-497722 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1227 09:35:56.524340  616179 ssh_runner.go:195] Run: crio config
	I1227 09:35:56.589833  616179 cni.go:84] Creating CNI manager for ""
	I1227 09:35:56.589857  616179 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 09:35:56.589873  616179 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1227 09:35:56.589906  616179 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8444 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-497722 NodeName:default-k8s-diff-port-497722 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1227 09:35:56.590070  616179 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-497722"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1227 09:35:56.590148  616179 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1227 09:35:56.600565  616179 binaries.go:51] Found k8s binaries, skipping transfer
	I1227 09:35:56.600635  616179 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1227 09:35:56.610722  616179 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I1227 09:35:56.627280  616179 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1227 09:35:56.649363  616179 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2227 bytes)
	I1227 09:35:56.666694  616179 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1227 09:35:56.671512  616179 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 09:35:56.684648  616179 ssh_runner.go:195] Run: sudo systemctl daemon-reload
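
The generated kubeadm config above is written to /var/tmp/minikube/kubeadm.yaml.new before minikube hands it to kubeadm. As a sketch, that file could also be validated offline with kubeadm's dry-run mode, assuming the v1.35.0 kubeadm binary minikube stages under /var/lib/minikube/binaries (the same directory listed above):

	sudo /var/lib/minikube/binaries/v1.35.0/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run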
	
	
	==> CRI-O <==
	Dec 27 09:35:43 embed-certs-912564 crio[778]: time="2025-12-27T09:35:43.949258355Z" level=info msg="Starting container: 6f250bac3e4b351b9d89f1eba4b4624a4fefa86f182a7105180b05d7b4293ce4" id=f98b4799-8db9-4176-9fd9-e824de2db720 name=/runtime.v1.RuntimeService/StartContainer
	Dec 27 09:35:43 embed-certs-912564 crio[778]: time="2025-12-27T09:35:43.951621407Z" level=info msg="Started container" PID=1890 containerID=6f250bac3e4b351b9d89f1eba4b4624a4fefa86f182a7105180b05d7b4293ce4 description=kube-system/coredns-7d764666f9-vm5hp/coredns id=f98b4799-8db9-4176-9fd9-e824de2db720 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c6f05573bb1675acb48494e683085cf3f446e213f153c8c170b22a98eb51ca2b
	Dec 27 09:35:47 embed-certs-912564 crio[778]: time="2025-12-27T09:35:47.455893731Z" level=info msg="Running pod sandbox: default/busybox/POD" id=20132942-1881-4896-9491-f493d8a33bca name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 27 09:35:47 embed-certs-912564 crio[778]: time="2025-12-27T09:35:47.455977424Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 09:35:47 embed-certs-912564 crio[778]: time="2025-12-27T09:35:47.558826818Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:77208432e27208aadedb456e97f58ee44246ab7b56e03951e265b2543d1cccf8 UID:274a708a-11df-4cc3-b67f-309656c1f9c6 NetNS:/var/run/netns/6f9d7139-bb52-43e1-9cde-3a1398150bb0 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000496a60}] Aliases:map[]}"
	Dec 27 09:35:47 embed-certs-912564 crio[778]: time="2025-12-27T09:35:47.558887942Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Dec 27 09:35:47 embed-certs-912564 crio[778]: time="2025-12-27T09:35:47.569985538Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:77208432e27208aadedb456e97f58ee44246ab7b56e03951e265b2543d1cccf8 UID:274a708a-11df-4cc3-b67f-309656c1f9c6 NetNS:/var/run/netns/6f9d7139-bb52-43e1-9cde-3a1398150bb0 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000496a60}] Aliases:map[]}"
	Dec 27 09:35:47 embed-certs-912564 crio[778]: time="2025-12-27T09:35:47.570110112Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Dec 27 09:35:47 embed-certs-912564 crio[778]: time="2025-12-27T09:35:47.570948138Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 27 09:35:47 embed-certs-912564 crio[778]: time="2025-12-27T09:35:47.572064832Z" level=info msg="Ran pod sandbox 77208432e27208aadedb456e97f58ee44246ab7b56e03951e265b2543d1cccf8 with infra container: default/busybox/POD" id=20132942-1881-4896-9491-f493d8a33bca name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 27 09:35:47 embed-certs-912564 crio[778]: time="2025-12-27T09:35:47.573328078Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=bbfc2ddf-0519-4df0-aa87-15381e7eb24d name=/runtime.v1.ImageService/ImageStatus
	Dec 27 09:35:47 embed-certs-912564 crio[778]: time="2025-12-27T09:35:47.573466969Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=bbfc2ddf-0519-4df0-aa87-15381e7eb24d name=/runtime.v1.ImageService/ImageStatus
	Dec 27 09:35:47 embed-certs-912564 crio[778]: time="2025-12-27T09:35:47.573513347Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=bbfc2ddf-0519-4df0-aa87-15381e7eb24d name=/runtime.v1.ImageService/ImageStatus
	Dec 27 09:35:47 embed-certs-912564 crio[778]: time="2025-12-27T09:35:47.574309213Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=9371cf6f-f9ee-4dff-a1d1-fafba0c478cd name=/runtime.v1.ImageService/PullImage
	Dec 27 09:35:47 embed-certs-912564 crio[778]: time="2025-12-27T09:35:47.576941491Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 27 09:35:49 embed-certs-912564 crio[778]: time="2025-12-27T09:35:49.572694288Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=9371cf6f-f9ee-4dff-a1d1-fafba0c478cd name=/runtime.v1.ImageService/PullImage
	Dec 27 09:35:49 embed-certs-912564 crio[778]: time="2025-12-27T09:35:49.573761415Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=0f18637e-421b-45c2-95e9-4c7c80069a59 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 09:35:49 embed-certs-912564 crio[778]: time="2025-12-27T09:35:49.575956479Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=73ec03c9-bca6-4a73-87c1-9d8ca9469554 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 09:35:49 embed-certs-912564 crio[778]: time="2025-12-27T09:35:49.585245314Z" level=info msg="Creating container: default/busybox/busybox" id=6e29db3e-aa41-430b-9859-a4e2405bac0a name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 09:35:49 embed-certs-912564 crio[778]: time="2025-12-27T09:35:49.585362839Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 09:35:49 embed-certs-912564 crio[778]: time="2025-12-27T09:35:49.590430595Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 09:35:49 embed-certs-912564 crio[778]: time="2025-12-27T09:35:49.591335219Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 09:35:49 embed-certs-912564 crio[778]: time="2025-12-27T09:35:49.633773077Z" level=info msg="Created container a668f9c9fc24b2051e6c5a8aee4918e8545b695f4e519875a91f2d2d6a8a0163: default/busybox/busybox" id=6e29db3e-aa41-430b-9859-a4e2405bac0a name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 09:35:49 embed-certs-912564 crio[778]: time="2025-12-27T09:35:49.634886396Z" level=info msg="Starting container: a668f9c9fc24b2051e6c5a8aee4918e8545b695f4e519875a91f2d2d6a8a0163" id=427a5d05-bdda-4425-970f-a19fd921a5d1 name=/runtime.v1.RuntimeService/StartContainer
	Dec 27 09:35:49 embed-certs-912564 crio[778]: time="2025-12-27T09:35:49.638125973Z" level=info msg="Started container" PID=1968 containerID=a668f9c9fc24b2051e6c5a8aee4918e8545b695f4e519875a91f2d2d6a8a0163 description=default/busybox/busybox id=427a5d05-bdda-4425-970f-a19fd921a5d1 name=/runtime.v1.RuntimeService/StartContainer sandboxID=77208432e27208aadedb456e97f58ee44246ab7b56e03951e265b2543d1cccf8
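
The sequence above (ImageStatus miss, then PullImage, then CreateContainer/StartContainer) is the normal CRI pull-and-run flow. The same image check can be reproduced by hand against this CRI-O instance with crictl, e.g. (image name taken from the log):

	sudo crictl pull gcr.io/k8s-minikube/busybox:1.28.4-glibc
	sudo crictl images --digests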
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	a668f9c9fc24b       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   7 seconds ago       Running             busybox                   0                   77208432e2720       busybox                                      default
	6f250bac3e4b3       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                      13 seconds ago      Running             coredns                   0                   c6f05573bb167       coredns-7d764666f9-vm5hp                     kube-system
	5be5dc993000a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 seconds ago      Running             storage-provisioner       0                   1816d1903ade8       storage-provisioner                          kube-system
	ddd5d0a9ec973       docker.io/kindest/kindnetd@sha256:7c22558dc06a570d46ea6e8a73b23cdc754eb81f7c08d3441a3171ad359ffc27    24 seconds ago      Running             kindnet-cni               0                   f60eca7d5f9b6       kindnet-bznfn                                kube-system
	0c6af9d0aa5df       32652ff1bbe6b6df16a3dc621fcad4e6e185a2852bffc5847aab79e36c54bfb8                                      27 seconds ago      Running             kube-proxy                0                   64e277d87374f       kube-proxy-dv8ch                             kube-system
	0e1f8e1e1379d       5c6acd67e9cd10eb60a246cd233db251ef62ea97e6572f897e873f0cb648f499                                      37 seconds ago      Running             kube-apiserver            0                   4326ecadcd5c3       kube-apiserver-embed-certs-912564            kube-system
	5cdf146d85759       550794e3b12ac21ec7fd940bdfb45f7c8dae3c52acd8c70e580b882de20c3dcc                                      37 seconds ago      Running             kube-scheduler            0                   ceb1e7e0ba8d4       kube-scheduler-embed-certs-912564            kube-system
	9eace42310390       2c9a4b058bd7e6ec479c38f9e8a1dac2f5ee5b0a3ebda6dfac968e8720229508                                      37 seconds ago      Running             kube-controller-manager   0                   9d6b3f1cb4dc6       kube-controller-manager-embed-certs-912564   kube-system
	192444c268985       0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2                                      37 seconds ago      Running             etcd                      0                   95d70271b564e       etcd-embed-certs-912564                      kube-system
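
The same container listing can be regenerated directly on the node (add -o json for the machine-readable form the harness collects elsewhere):

	sudo crictl ps -a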
	
	
	==> coredns [6f250bac3e4b351b9d89f1eba4b4624a4fefa86f182a7105180b05d7b4293ce4] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:43086 - 28093 "HINFO IN 3741040803153031302.8348697892137957055. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.022810935s
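
The configuration SHA512 logged by the reload plugin is a hash of the Corefile, which in a kubeadm-style cluster lives in the coredns ConfigMap and can be inspected with:

	kubectl -n kube-system get configmap coredns -o yaml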
	
	
	==> describe nodes <==
	Name:               embed-certs-912564
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-912564
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6d12708197eb5984f26340287ced0d25367967e8
	                    minikube.k8s.io/name=embed-certs-912564
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_27T09_35_25_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 27 Dec 2025 09:35:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-912564
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 27 Dec 2025 09:35:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 27 Dec 2025 09:35:55 +0000   Sat, 27 Dec 2025 09:35:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 27 Dec 2025 09:35:55 +0000   Sat, 27 Dec 2025 09:35:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 27 Dec 2025 09:35:55 +0000   Sat, 27 Dec 2025 09:35:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 27 Dec 2025 09:35:55 +0000   Sat, 27 Dec 2025 09:35:43 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    embed-certs-912564
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 6a05ebbd9dbde47388fc365d694bbbfd
	  System UUID:                399800ce-3f0c-4a8a-a24c-ac96dc71a9c4
	  Boot ID:                    38891013-55ac-4a66-aef6-0b0711ecc60c
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-7d764666f9-vm5hp                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     27s
	  kube-system                 etcd-embed-certs-912564                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         33s
	  kube-system                 kindnet-bznfn                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      27s
	  kube-system                 kube-apiserver-embed-certs-912564             250m (3%)     0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-controller-manager-embed-certs-912564    200m (2%)     0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-proxy-dv8ch                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-scheduler-embed-certs-912564             100m (1%)     0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  29s   node-controller  Node embed-certs-912564 event: Registered Node embed-certs-912564 in Controller
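
For reference, this block is standard node-description output and can be regenerated at any point while the cluster is up:

	kubectl describe node embed-certs-912564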
	
	
	==> dmesg <==
	[Dec27 09:02] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 3a ce 76 ea 00 16 08 06
	[ +15.308804] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 5e ed 76 f4 5a 59 08 06
	[  +0.000321] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 3a ce 76 ea 00 16 08 06
	[  +1.257710] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 62 30 8d 6e b6 d3 08 06
	[Dec27 09:03] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff de f8 3b c4 47 35 08 06
	[  +0.002052] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff de 0f d7 5a 4b 84 08 06
	[ +15.582399] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff f6 ce 0c 6c 86 d2 08 06
	[  +0.000307] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 62 30 8d 6e b6 d3 08 06
	[  +9.916919] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 5e cc 0c a0 d8 19 08 06
	[  +0.000360] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 06 01 72 f8 79 43 08 06
	[ +21.695394] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 66 bb be 81 50 ce 08 06
	[  +0.000381] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff de 0f d7 5a 4b 84 08 06
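
The martian-source lines above are common on the bridged pod networks these CI runs use, and they only appear because martian logging is enabled in the kernel; as a sketch, they could be silenced on the host with:

	sudo sysctl -w net.ipv4.conf.all.log_martians=0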
	
	
	==> etcd [192444c2689851e730e2bb86d5ee9e5ccf5455167753c4cb52cae5c12489d3d9] <==
	{"level":"info","ts":"2025-12-27T09:35:21.045956Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"dfc97eb0aae75b33 has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-27T09:35:21.045968Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"dfc97eb0aae75b33 became leader at term 2"}
	{"level":"info","ts":"2025-12-27T09:35:21.045975Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: dfc97eb0aae75b33 elected leader dfc97eb0aae75b33 at term 2"}
	{"level":"info","ts":"2025-12-27T09:35:21.046552Z","caller":"etcdserver/server.go:2420","msg":"setting up initial cluster version using v3 API","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-27T09:35:21.047070Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T09:35:21.047067Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"dfc97eb0aae75b33","local-member-attributes":"{Name:embed-certs-912564 ClientURLs:[https://192.168.94.2:2379]}","cluster-id":"da400bbece288f5a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-27T09:35:21.047090Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T09:35:21.047301Z","caller":"membership/cluster.go:682","msg":"set initial cluster version","cluster-id":"da400bbece288f5a","local-member-id":"dfc97eb0aae75b33","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-27T09:35:21.047451Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-27T09:35:21.047504Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-27T09:35:21.047476Z","caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-27T09:35:21.047625Z","caller":"etcdserver/server.go:2440","msg":"cluster version is updated","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-27T09:35:21.047694Z","caller":"version/monitor.go:116","msg":"cluster version differs from storage version.","cluster-version":"3.6.0","storage-version":"3.5.0"}
	{"level":"info","ts":"2025-12-27T09:35:21.047931Z","caller":"schema/migration.go:65","msg":"updated storage version","new-storage-version":"3.6.0"}
	{"level":"info","ts":"2025-12-27T09:35:21.048396Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-27T09:35:21.048466Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-27T09:35:21.051743Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.94.2:2379"}
	{"level":"info","ts":"2025-12-27T09:35:21.052073Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-27T09:35:43.573258Z","caller":"traceutil/trace.go:172","msg":"trace[1442204851] linearizableReadLoop","detail":"{readStateIndex:421; appliedIndex:421; }","duration":"104.460784ms","start":"2025-12-27T09:35:43.468766Z","end":"2025-12-27T09:35:43.573227Z","steps":["trace[1442204851] 'read index received'  (duration: 104.450519ms)","trace[1442204851] 'applied index is now lower than readState.Index'  (duration: 9.207µs)"],"step_count":2}
	{"level":"info","ts":"2025-12-27T09:35:43.573428Z","caller":"traceutil/trace.go:172","msg":"trace[1725284288] transaction","detail":"{read_only:false; response_revision:406; number_of_response:1; }","duration":"125.192259ms","start":"2025-12-27T09:35:43.448222Z","end":"2025-12-27T09:35:43.573414Z","steps":["trace[1725284288] 'process raft request'  (duration: 125.094465ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-27T09:35:43.573467Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"104.659835ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/embed-certs-912564\" limit:1 ","response":"range_response_count:1 size:5488"}
	{"level":"info","ts":"2025-12-27T09:35:43.573517Z","caller":"traceutil/trace.go:172","msg":"trace[433506646] range","detail":"{range_begin:/registry/minions/embed-certs-912564; range_end:; response_count:1; response_revision:405; }","duration":"104.751456ms","start":"2025-12-27T09:35:43.468754Z","end":"2025-12-27T09:35:43.573506Z","steps":["trace[433506646] 'agreement among raft nodes before linearized reading'  (duration: 104.612889ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-27T09:35:43.576527Z","caller":"traceutil/trace.go:172","msg":"trace[1620085575] transaction","detail":"{read_only:false; response_revision:407; number_of_response:1; }","duration":"128.272512ms","start":"2025-12-27T09:35:43.448238Z","end":"2025-12-27T09:35:43.576510Z","steps":["trace[1620085575] 'process raft request'  (duration: 128.073591ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-27T09:35:47.705573Z","caller":"traceutil/trace.go:172","msg":"trace[1943508034] transaction","detail":"{read_only:false; response_revision:439; number_of_response:1; }","duration":"129.505163ms","start":"2025-12-27T09:35:47.576052Z","end":"2025-12-27T09:35:47.705557Z","steps":["trace[1943508034] 'process raft request'  (duration: 129.415345ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-27T09:35:50.784353Z","caller":"traceutil/trace.go:172","msg":"trace[492365321] transaction","detail":"{read_only:false; response_revision:445; number_of_response:1; }","duration":"137.299213ms","start":"2025-12-27T09:35:50.647035Z","end":"2025-12-27T09:35:50.784334Z","steps":["trace[492365321] 'process raft request'  (duration: 137.199933ms)"],"step_count":1}
	
	
	==> kernel <==
	 09:35:57 up  1:18,  0 user,  load average: 2.97, 3.04, 2.25
	Linux embed-certs-912564 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [ddd5d0a9ec973b20e5afa5cb254b981d0c1b100acd7981f6a2c7259245bd5fd6] <==
	I1227 09:35:32.780470       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1227 09:35:32.780745       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1227 09:35:32.780916       1 main.go:148] setting mtu 1500 for CNI 
	I1227 09:35:32.780937       1 main.go:178] kindnetd IP family: "ipv4"
	I1227 09:35:32.780969       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-27T09:35:32Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1227 09:35:32.983680       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1227 09:35:32.983769       1 controller.go:381] "Waiting for informer caches to sync"
	I1227 09:35:32.983806       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1227 09:35:33.076395       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1227 09:35:33.376319       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1227 09:35:33.376350       1 metrics.go:72] Registering metrics
	I1227 09:35:33.376421       1 controller.go:711] "Syncing nftables rules"
	I1227 09:35:42.983378       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1227 09:35:42.983444       1 main.go:301] handling current node
	I1227 09:35:52.984707       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1227 09:35:52.984776       1 main.go:301] handling current node
	
	
	==> kube-apiserver [0e1f8e1e1379dac8f67acb372e3f145c6099679a0f8794e68fa7213d800a739e] <==
	E1227 09:35:22.124074       1 controller.go:201] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1227 09:35:22.128043       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1227 09:35:22.128237       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1227 09:35:22.128484       1 shared_informer.go:377] "Caches are synced"
	I1227 09:35:22.129599       1 controller.go:667] quota admission added evaluator for: namespaces
	I1227 09:35:22.326739       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1227 09:35:23.032934       1 storage_scheduling.go:123] created PriorityClass system-node-critical with value 2000001000
	I1227 09:35:23.036738       1 storage_scheduling.go:123] created PriorityClass system-cluster-critical with value 2000000000
	I1227 09:35:23.036761       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1227 09:35:23.471614       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1227 09:35:23.505186       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1227 09:35:23.626340       1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1227 09:35:23.633181       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.94.2]
	I1227 09:35:23.634113       1 controller.go:667] quota admission added evaluator for: endpoints
	I1227 09:35:23.637568       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1227 09:35:24.056918       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1227 09:35:24.688996       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1227 09:35:24.701248       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1227 09:35:24.708524       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1227 09:35:29.662753       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1227 09:35:29.669706       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1227 09:35:30.009157       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1227 09:35:30.057753       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1227 09:35:30.057753       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1227 09:35:56.195420       1 conn.go:339] Error on socket receive: read tcp 192.168.94.2:8443->192.168.94.1:54776: use of closed network connection
	
	
	==> kube-controller-manager [9eace4231039031d7aa61e9248451f19dfdc1b9bab093c3b38d286bc1e4aff1d] <==
	I1227 09:35:28.860264       1 shared_informer.go:377] "Caches are synced"
	I1227 09:35:28.861329       1 shared_informer.go:377] "Caches are synced"
	I1227 09:35:28.861370       1 shared_informer.go:377] "Caches are synced"
	I1227 09:35:28.861657       1 shared_informer.go:377] "Caches are synced"
	I1227 09:35:28.860243       1 shared_informer.go:377] "Caches are synced"
	I1227 09:35:28.860737       1 shared_informer.go:377] "Caches are synced"
	I1227 09:35:28.861785       1 shared_informer.go:377] "Caches are synced"
	I1227 09:35:28.861878       1 shared_informer.go:377] "Caches are synced"
	I1227 09:35:28.861975       1 shared_informer.go:377] "Caches are synced"
	I1227 09:35:28.860249       1 shared_informer.go:377] "Caches are synced"
	I1227 09:35:28.862176       1 shared_informer.go:377] "Caches are synced"
	I1227 09:35:28.862348       1 shared_informer.go:377] "Caches are synced"
	I1227 09:35:28.860187       1 shared_informer.go:377] "Caches are synced"
	I1227 09:35:28.860236       1 shared_informer.go:377] "Caches are synced"
	I1227 09:35:28.861950       1 node_lifecycle_controller.go:1038] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I1227 09:35:28.860668       1 shared_informer.go:377] "Caches are synced"
	I1227 09:35:28.861897       1 shared_informer.go:377] "Caches are synced"
	I1227 09:35:28.865114       1 shared_informer.go:377] "Caches are synced"
	I1227 09:35:28.870132       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 09:35:28.871346       1 range_allocator.go:433] "Set node PodCIDR" node="embed-certs-912564" podCIDRs=["10.244.0.0/24"]
	I1227 09:35:28.961077       1 shared_informer.go:377] "Caches are synced"
	I1227 09:35:28.961097       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1227 09:35:28.961101       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1227 09:35:28.971347       1 shared_informer.go:377] "Caches are synced"
	I1227 09:35:44.070329       1 node_lifecycle_controller.go:1057] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-proxy [0c6af9d0aa5df8cabe56472b898569e4d38d06046c5043224236b95ee4e75358] <==
	I1227 09:35:30.508695       1 server_linux.go:53] "Using iptables proxy"
	I1227 09:35:30.587048       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 09:35:30.687583       1 shared_informer.go:377] "Caches are synced"
	I1227 09:35:30.687635       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1227 09:35:30.687810       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1227 09:35:30.722753       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1227 09:35:30.722829       1 server_linux.go:136] "Using iptables Proxier"
	I1227 09:35:30.729356       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1227 09:35:30.729812       1 server.go:529] "Version info" version="v1.35.0"
	I1227 09:35:30.729853       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 09:35:30.731270       1 config.go:200] "Starting service config controller"
	I1227 09:35:30.731298       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1227 09:35:30.731354       1 config.go:403] "Starting serviceCIDR config controller"
	I1227 09:35:30.731370       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1227 09:35:30.731394       1 config.go:106] "Starting endpoint slice config controller"
	I1227 09:35:30.731400       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1227 09:35:30.731545       1 config.go:309] "Starting node config controller"
	I1227 09:35:30.731554       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1227 09:35:30.731560       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1227 09:35:30.831504       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1227 09:35:30.831512       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1227 09:35:30.831528       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
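
The "configuration may be incomplete" warning above is kube-proxy itself suggesting a fix; the corresponding knob lives in the KubeProxyConfiguration document minikube generates (shown earlier in this report). A minimal sketch of the suggested setting:

	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	nodePortAddresses: ["primary"]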
	
	
	==> kube-scheduler [5cdf146d85759dcb677923671580470f646cfc079ecabb5da5cb77fa7ad8423d] <==
	E1227 09:35:22.087537       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E1227 09:35:22.088248       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E1227 09:35:22.088411       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E1227 09:35:22.088576       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E1227 09:35:22.088781       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E1227 09:35:22.089447       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E1227 09:35:22.089516       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E1227 09:35:22.089580       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E1227 09:35:22.089629       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E1227 09:35:22.089712       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E1227 09:35:22.089760       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E1227 09:35:22.089770       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E1227 09:35:22.089903       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E1227 09:35:22.089954       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E1227 09:35:22.089973       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E1227 09:35:22.089977       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E1227 09:35:22.920687       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E1227 09:35:22.958536       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E1227 09:35:22.961163       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E1227 09:35:23.001433       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E1227 09:35:23.061073       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E1227 09:35:23.077097       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E1227 09:35:23.113374       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E1227 09:35:23.335309       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1693" type="*v1.ConfigMap"
	I1227 09:35:25.782171       1 shared_informer.go:377] "Caches are synced"
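
The "Failed to watch ... is forbidden" errors here are the usual startup race: the scheduler begins its watches before its RBAC bindings have been reconciled, and the final "Caches are synced" line shows it recovered. Whether a given identity can list a resource can be probed directly, e.g.:

	kubectl auth can-i list pods --as=system:kube-scheduler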
	
	
	==> kubelet <==
	Dec 27 09:35:30 embed-certs-912564 kubelet[1302]: I1227 09:35:30.163331    1302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/73083928-8435-4e2e-913b-ff93fa424106-lib-modules\") pod \"kindnet-bznfn\" (UID: \"73083928-8435-4e2e-913b-ff93fa424106\") " pod="kube-system/kindnet-bznfn"
	Dec 27 09:35:30 embed-certs-912564 kubelet[1302]: I1227 09:35:30.163356    1302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvjvl\" (UniqueName: \"kubernetes.io/projected/73083928-8435-4e2e-913b-ff93fa424106-kube-api-access-fvjvl\") pod \"kindnet-bznfn\" (UID: \"73083928-8435-4e2e-913b-ff93fa424106\") " pod="kube-system/kindnet-bznfn"
	Dec 27 09:35:30 embed-certs-912564 kubelet[1302]: I1227 09:35:30.163378    1302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2a923e9f-87c7-472f-b5b9-506bcdc67cb3-kube-proxy\") pod \"kube-proxy-dv8ch\" (UID: \"2a923e9f-87c7-472f-b5b9-506bcdc67cb3\") " pod="kube-system/kube-proxy-dv8ch"
	Dec 27 09:35:30 embed-certs-912564 kubelet[1302]: I1227 09:35:30.163396    1302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2a923e9f-87c7-472f-b5b9-506bcdc67cb3-lib-modules\") pod \"kube-proxy-dv8ch\" (UID: \"2a923e9f-87c7-472f-b5b9-506bcdc67cb3\") " pod="kube-system/kube-proxy-dv8ch"
	Dec 27 09:35:30 embed-certs-912564 kubelet[1302]: I1227 09:35:30.597027    1302 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-dv8ch" podStartSLOduration=0.597005913 podStartE2EDuration="597.005913ms" podCreationTimestamp="2025-12-27 09:35:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 09:35:30.596983164 +0000 UTC m=+6.145754347" watchObservedRunningTime="2025-12-27 09:35:30.597005913 +0000 UTC m=+6.145777089"
	Dec 27 09:35:31 embed-certs-912564 kubelet[1302]: E1227 09:35:31.273011    1302 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-embed-certs-912564" containerName="kube-controller-manager"
	Dec 27 09:35:31 embed-certs-912564 kubelet[1302]: E1227 09:35:31.605007    1302 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-embed-certs-912564" containerName="kube-scheduler"
	Dec 27 09:35:33 embed-certs-912564 kubelet[1302]: I1227 09:35:33.601217    1302 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kindnet-bznfn" podStartSLOduration=1.391459679 podStartE2EDuration="3.601199805s" podCreationTimestamp="2025-12-27 09:35:30 +0000 UTC" firstStartedPulling="2025-12-27 09:35:30.395322491 +0000 UTC m=+5.944093658" lastFinishedPulling="2025-12-27 09:35:32.605062609 +0000 UTC m=+8.153833784" observedRunningTime="2025-12-27 09:35:33.600943498 +0000 UTC m=+9.149714706" watchObservedRunningTime="2025-12-27 09:35:33.601199805 +0000 UTC m=+9.149970982"
	Dec 27 09:35:36 embed-certs-912564 kubelet[1302]: E1227 09:35:36.434648    1302 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-embed-certs-912564" containerName="kube-apiserver"
	Dec 27 09:35:36 embed-certs-912564 kubelet[1302]: E1227 09:35:36.819883    1302 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-embed-certs-912564" containerName="etcd"
	Dec 27 09:35:41 embed-certs-912564 kubelet[1302]: E1227 09:35:41.278476    1302 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-embed-certs-912564" containerName="kube-controller-manager"
	Dec 27 09:35:41 embed-certs-912564 kubelet[1302]: E1227 09:35:41.611378    1302 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-embed-certs-912564" containerName="kube-scheduler"
	Dec 27 09:35:43 embed-certs-912564 kubelet[1302]: I1227 09:35:43.287214    1302 kubelet_node_status.go:427] "Fast updating node status as it just became ready"
	Dec 27 09:35:43 embed-certs-912564 kubelet[1302]: I1227 09:35:43.658762    1302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/af70aaa7-5435-48e3-8275-f12100402980-tmp\") pod \"storage-provisioner\" (UID: \"af70aaa7-5435-48e3-8275-f12100402980\") " pod="kube-system/storage-provisioner"
	Dec 27 09:35:43 embed-certs-912564 kubelet[1302]: I1227 09:35:43.659052    1302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4k6nh\" (UniqueName: \"kubernetes.io/projected/e07c8612-a077-44b5-b84f-6dda3bc90a64-kube-api-access-4k6nh\") pod \"coredns-7d764666f9-vm5hp\" (UID: \"e07c8612-a077-44b5-b84f-6dda3bc90a64\") " pod="kube-system/coredns-7d764666f9-vm5hp"
	Dec 27 09:35:43 embed-certs-912564 kubelet[1302]: I1227 09:35:43.659172    1302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7whg\" (UniqueName: \"kubernetes.io/projected/af70aaa7-5435-48e3-8275-f12100402980-kube-api-access-c7whg\") pod \"storage-provisioner\" (UID: \"af70aaa7-5435-48e3-8275-f12100402980\") " pod="kube-system/storage-provisioner"
	Dec 27 09:35:43 embed-certs-912564 kubelet[1302]: I1227 09:35:43.659222    1302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e07c8612-a077-44b5-b84f-6dda3bc90a64-config-volume\") pod \"coredns-7d764666f9-vm5hp\" (UID: \"e07c8612-a077-44b5-b84f-6dda3bc90a64\") " pod="kube-system/coredns-7d764666f9-vm5hp"
	Dec 27 09:35:44 embed-certs-912564 kubelet[1302]: E1227 09:35:44.617463    1302 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-vm5hp" containerName="coredns"
	Dec 27 09:35:44 embed-certs-912564 kubelet[1302]: I1227 09:35:44.638529    1302 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=14.638510177 podStartE2EDuration="14.638510177s" podCreationTimestamp="2025-12-27 09:35:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 09:35:44.637828594 +0000 UTC m=+20.186599770" watchObservedRunningTime="2025-12-27 09:35:44.638510177 +0000 UTC m=+20.187281349"
	Dec 27 09:35:44 embed-certs-912564 kubelet[1302]: I1227 09:35:44.638617    1302 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-vm5hp" podStartSLOduration=14.638612948 podStartE2EDuration="14.638612948s" podCreationTimestamp="2025-12-27 09:35:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 09:35:44.62894704 +0000 UTC m=+20.177718223" watchObservedRunningTime="2025-12-27 09:35:44.638612948 +0000 UTC m=+20.187384124"
	Dec 27 09:35:45 embed-certs-912564 kubelet[1302]: E1227 09:35:45.622710    1302 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-vm5hp" containerName="coredns"
	Dec 27 09:35:46 embed-certs-912564 kubelet[1302]: E1227 09:35:46.626014    1302 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-vm5hp" containerName="coredns"
	Dec 27 09:35:47 embed-certs-912564 kubelet[1302]: I1227 09:35:47.185669    1302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zscnp\" (UniqueName: \"kubernetes.io/projected/274a708a-11df-4cc3-b67f-309656c1f9c6-kube-api-access-zscnp\") pod \"busybox\" (UID: \"274a708a-11df-4cc3-b67f-309656c1f9c6\") " pod="default/busybox"
	Dec 27 09:35:50 embed-certs-912564 kubelet[1302]: I1227 09:35:50.786026    1302 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.784957803 podStartE2EDuration="3.786009774s" podCreationTimestamp="2025-12-27 09:35:47 +0000 UTC" firstStartedPulling="2025-12-27 09:35:47.573921974 +0000 UTC m=+23.122693140" lastFinishedPulling="2025-12-27 09:35:49.574973958 +0000 UTC m=+25.123745111" observedRunningTime="2025-12-27 09:35:50.785809451 +0000 UTC m=+26.334580630" watchObservedRunningTime="2025-12-27 09:35:50.786009774 +0000 UTC m=+26.334780947"
	Dec 27 09:35:56 embed-certs-912564 kubelet[1302]: E1227 09:35:56.195207    1302 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:41128->127.0.0.1:42067: write tcp 127.0.0.1:41128->127.0.0.1:42067: write: broken pipe
	
	
	==> storage-provisioner [5be5dc993000ab8f2d3004bc8fd46843df73f1a2de7f45004e5bb338344071fe] <==
	I1227 09:35:43.960701       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1227 09:35:43.970564       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1227 09:35:43.970606       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1227 09:35:43.973741       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 09:35:43.982050       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1227 09:35:43.982248       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1227 09:35:43.982389       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"82a4b092-3ec4-4d7c-8528-91199d1bbfdd", APIVersion:"v1", ResourceVersion:"419", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-912564_29798fcd-7f20-426d-9ed2-ce6651a3baab became leader
	I1227 09:35:43.982459       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-912564_29798fcd-7f20-426d-9ed2-ce6651a3baab!
	W1227 09:35:43.985053       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 09:35:43.992930       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1227 09:35:44.083231       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-912564_29798fcd-7f20-426d-9ed2-ce6651a3baab!
	W1227 09:35:45.996403       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 09:35:46.025765       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 09:35:48.030369       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 09:35:48.035671       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 09:35:50.040006       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 09:35:50.045639       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 09:35:52.048858       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 09:35:52.071217       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 09:35:54.075126       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 09:35:54.079878       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 09:35:56.083412       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 09:35:56.088260       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
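The warnings flooding the storage-provisioner log above come from its leader election, which still locks on a v1 Endpoints object; every acquire/renew round-trips through the deprecated API, and the client prints the EndpointSlice advisory each time. A minimal client-go sketch of the Lease-based lock that the warning recommends migrating to (illustrative only: the identity string is invented and this is not the bundled provisioner's actual code):

package main

import (
	"context"
	"log"
	"time"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Same namespace and lock name as the log above, but backed by a
	// coordination.k8s.io Lease instead of a v1 Endpoints object, so the
	// deprecation warning is never emitted.
	lock, err := resourcelock.New(resourcelock.LeasesResourceLock,
		"kube-system", "k8s.io-minikube-hostpath",
		client.CoreV1(), client.CoordinationV1(),
		resourcelock.ResourceLockConfig{Identity: "demo-identity"}) // invented identity
	if err != nil {
		log.Fatal(err)
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second,
		RenewDeadline: 10 * time.Second,
		RetryPeriod:   2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) { log.Println("became leader") },
			OnStoppedLeading: func() { log.Println("lost leadership") },
		},
	})
}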
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-912564 -n embed-certs-912564
helpers_test.go:270: (dbg) Run:  kubectl --context embed-certs-912564 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.25s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.35s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-963457 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-963457 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (304.364322ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T09:36:29Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
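The MK_ADDON_ENABLE_PAUSED error above comes from minikube's paused-state check, which shells into the node and runs `sudo runc list -f json`; on this CRI-O node the runc state directory /run/runc does not exist, so the probe fails outright rather than reporting a pause state. A minimal Go sketch of an equivalent probe from the host, assuming `docker exec` access to the kic node container (illustrative, not minikube's actual helper):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Assumed: the kic node container name from this report.
	profile := "no-preload-963457"
	// Mirror the failing check from the log: ask runc for its container list.
	out, err := exec.Command("docker", "exec", profile,
		"sudo", "runc", "list", "-f", "json").CombinedOutput()
	if err != nil {
		// On a CRI-O node this can fail with
		// "open /run/runc: no such file or directory", as seen above.
		fmt.Printf("runc list failed: %v\n%s", err, out)
		return
	}
	fmt.Printf("runc containers: %s\n", out)
}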
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p no-preload-963457 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-963457 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context no-preload-963457 describe deploy/metrics-server -n kube-system: exit status 1 (72.924266ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context no-preload-963457 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
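The assertion at start_stop_delete_test.go:219 boils down to reading the deployment's container image and checking that it picked up the fake.domain registry override passed via --registries. A hedged sketch of an equivalent check in Go, assuming kubectl is on PATH (the jsonpath expression is an illustration, not what helpers_test.go literally runs):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Read the metrics-server container image from the deployment spec.
	out, err := exec.Command("kubectl", "--context", "no-preload-963457",
		"-n", "kube-system", "get", "deploy", "metrics-server",
		"-o", "jsonpath={.spec.template.spec.containers[*].image}").Output()
	if err != nil {
		// Matches the NotFound failure above: the deployment was never created.
		fmt.Println("get deployment failed:", err)
		return
	}
	// The test expects the registry override to show up in the image reference.
	want := "fake.domain/registry.k8s.io/echoserver:1.4"
	fmt.Printf("image %q contains override: %v\n", out, strings.Contains(string(out), want))
}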
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-963457
helpers_test.go:244: (dbg) docker inspect no-preload-963457:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "0e530c327725200854097c32fa88d971e9134b10f00f38040c69cbfdb8db2177",
	        "Created": "2025-12-27T09:35:31.385556523Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 611155,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-27T09:35:31.413493245Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c444b5e11df9d6bc496256b0e11f4e11deb33a885211ded2c3a4667df55bcb6b",
	        "ResolvConfPath": "/var/lib/docker/containers/0e530c327725200854097c32fa88d971e9134b10f00f38040c69cbfdb8db2177/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0e530c327725200854097c32fa88d971e9134b10f00f38040c69cbfdb8db2177/hostname",
	        "HostsPath": "/var/lib/docker/containers/0e530c327725200854097c32fa88d971e9134b10f00f38040c69cbfdb8db2177/hosts",
	        "LogPath": "/var/lib/docker/containers/0e530c327725200854097c32fa88d971e9134b10f00f38040c69cbfdb8db2177/0e530c327725200854097c32fa88d971e9134b10f00f38040c69cbfdb8db2177-json.log",
	        "Name": "/no-preload-963457",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-963457:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-963457",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "0e530c327725200854097c32fa88d971e9134b10f00f38040c69cbfdb8db2177",
	                "LowerDir": "/var/lib/docker/overlay2/a948dd6c51157da59690737cde9043eec469d473ae47400a0e1b93038c0c9f1d-init/diff:/var/lib/docker/overlay2/8c7e7d4b0a6e752d3b033998593f18412fe880cdae3fca36ce4655d5d5dd6a34/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a948dd6c51157da59690737cde9043eec469d473ae47400a0e1b93038c0c9f1d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a948dd6c51157da59690737cde9043eec469d473ae47400a0e1b93038c0c9f1d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a948dd6c51157da59690737cde9043eec469d473ae47400a0e1b93038c0c9f1d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-963457",
	                "Source": "/var/lib/docker/volumes/no-preload-963457/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-963457",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-963457",
	                "name.minikube.sigs.k8s.io": "no-preload-963457",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "c0669311db47b41afa7248567dd41fb48159bc4979651148003acf198d4e6750",
	            "SandboxKey": "/var/run/docker/netns/c0669311db47",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33438"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33439"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33442"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33440"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33441"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-963457": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "e27fce9ec482a6f231f0cd34fc8f67937ff2dfde3915e36e90c5e0b4fd43cbe7",
	                    "EndpointID": "f3159e3e3a7aef672b9ebe175977cc9ca0c77629a379b0d55564f2b57ca89634",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "06:a9:6b:ba:15:1d",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-963457",
	                        "0e530c327725"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
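Earlier cli_runner lines show exactly how minikube consumes this inspect dump: it hands a Go template to `docker container inspect -f` to pull the host port mapped to 22/tcp. A small standalone sketch of the same lookup, using the container name from this report:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Template copied from the cli_runner invocations in the logs above.
	tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect",
		"-f", tmpl, "no-preload-963457").Output()
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	// For the NetworkSettings block above this prints 33438.
	fmt.Printf("ssh host port: %s", out)
}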
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-963457 -n no-preload-963457
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-963457 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p no-preload-963457 logs -n 25: (1.112138859s)
helpers_test.go:261: TestStartStop/group/no-preload/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p test-preload-805186 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio                                                                                                                  │ test-preload-805186          │ jenkins │ v1.37.0 │ 27 Dec 25 09:33 UTC │ 27 Dec 25 09:34 UTC │
	│ start   │ -p running-upgrade-561421 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                                                      │ running-upgrade-561421       │ jenkins │ v1.37.0 │ 27 Dec 25 09:33 UTC │ 27 Dec 25 09:34 UTC │
	│ delete  │ -p running-upgrade-561421                                                                                                                                                                                                                     │ running-upgrade-561421       │ jenkins │ v1.37.0 │ 27 Dec 25 09:34 UTC │ 27 Dec 25 09:34 UTC │
	│ start   │ -p old-k8s-version-094398 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-094398       │ jenkins │ v1.37.0 │ 27 Dec 25 09:34 UTC │ 27 Dec 25 09:35 UTC │
	│ image   │ test-preload-805186 image pull ghcr.io/medyagh/image-mirrors/busybox:latest                                                                                                                                                                   │ test-preload-805186          │ jenkins │ v1.37.0 │ 27 Dec 25 09:34 UTC │ 27 Dec 25 09:34 UTC │
	│ stop    │ -p test-preload-805186                                                                                                                                                                                                                        │ test-preload-805186          │ jenkins │ v1.37.0 │ 27 Dec 25 09:34 UTC │ 27 Dec 25 09:34 UTC │
	│ start   │ -p test-preload-805186 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio                                                                                                                            │ test-preload-805186          │ jenkins │ v1.37.0 │ 27 Dec 25 09:34 UTC │ 27 Dec 25 09:35 UTC │
	│ start   │ -p cert-expiration-237269 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-237269       │ jenkins │ v1.37.0 │ 27 Dec 25 09:35 UTC │ 27 Dec 25 09:35 UTC │
	│ delete  │ -p cert-expiration-237269                                                                                                                                                                                                                     │ cert-expiration-237269       │ jenkins │ v1.37.0 │ 27 Dec 25 09:35 UTC │                     │
	│ start   │ -p embed-certs-912564 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                        │ embed-certs-912564           │ jenkins │ v1.37.0 │ 27 Dec 25 09:35 UTC │ 27 Dec 25 09:35 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-094398 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-094398       │ jenkins │ v1.37.0 │ 27 Dec 25 09:35 UTC │                     │
	│ stop    │ -p old-k8s-version-094398 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-094398       │ jenkins │ v1.37.0 │ 27 Dec 25 09:35 UTC │ 27 Dec 25 09:35 UTC │
	│ image   │ test-preload-805186 image list                                                                                                                                                                                                                │ test-preload-805186          │ jenkins │ v1.37.0 │ 27 Dec 25 09:35 UTC │ 27 Dec 25 09:35 UTC │
	│ delete  │ -p test-preload-805186                                                                                                                                                                                                                        │ test-preload-805186          │ jenkins │ v1.37.0 │ 27 Dec 25 09:35 UTC │ 27 Dec 25 09:35 UTC │
	│ delete  │ -p disable-driver-mounts-917808                                                                                                                                                                                                               │ disable-driver-mounts-917808 │ jenkins │ v1.37.0 │ 27 Dec 25 09:35 UTC │ 27 Dec 25 09:35 UTC │
	│ start   │ -p no-preload-963457 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                       │ no-preload-963457            │ jenkins │ v1.37.0 │ 27 Dec 25 09:35 UTC │ 27 Dec 25 09:36 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-094398 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-094398       │ jenkins │ v1.37.0 │ 27 Dec 25 09:35 UTC │ 27 Dec 25 09:35 UTC │
	│ start   │ -p old-k8s-version-094398 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-094398       │ jenkins │ v1.37.0 │ 27 Dec 25 09:35 UTC │                     │
	│ delete  │ -p stopped-upgrade-196124                                                                                                                                                                                                                     │ stopped-upgrade-196124       │ jenkins │ v1.37.0 │ 27 Dec 25 09:35 UTC │ 27 Dec 25 09:35 UTC │
	│ start   │ -p default-k8s-diff-port-497722 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-497722 │ jenkins │ v1.37.0 │ 27 Dec 25 09:35 UTC │ 27 Dec 25 09:36 UTC │
	│ addons  │ enable metrics-server -p embed-certs-912564 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-912564           │ jenkins │ v1.37.0 │ 27 Dec 25 09:35 UTC │                     │
	│ stop    │ -p embed-certs-912564 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-912564           │ jenkins │ v1.37.0 │ 27 Dec 25 09:35 UTC │ 27 Dec 25 09:36 UTC │
	│ addons  │ enable dashboard -p embed-certs-912564 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-912564           │ jenkins │ v1.37.0 │ 27 Dec 25 09:36 UTC │ 27 Dec 25 09:36 UTC │
	│ start   │ -p embed-certs-912564 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                        │ embed-certs-912564           │ jenkins │ v1.37.0 │ 27 Dec 25 09:36 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-963457 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-963457            │ jenkins │ v1.37.0 │ 27 Dec 25 09:36 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/27 09:36:15
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1227 09:36:15.755856  622335 out.go:360] Setting OutFile to fd 1 ...
	I1227 09:36:15.755997  622335 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:36:15.756005  622335 out.go:374] Setting ErrFile to fd 2...
	I1227 09:36:15.756012  622335 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:36:15.756228  622335 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22343-373581/.minikube/bin
	I1227 09:36:15.756685  622335 out.go:368] Setting JSON to false
	I1227 09:36:15.758150  622335 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4720,"bootTime":1766823456,"procs":363,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1227 09:36:15.758213  622335 start.go:143] virtualization: kvm guest
	I1227 09:36:15.759939  622335 out.go:179] * [embed-certs-912564] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1227 09:36:15.761016  622335 out.go:179]   - MINIKUBE_LOCATION=22343
	I1227 09:36:15.761014  622335 notify.go:221] Checking for updates...
	I1227 09:36:15.763382  622335 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 09:36:15.764638  622335 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22343-373581/kubeconfig
	I1227 09:36:15.765807  622335 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22343-373581/.minikube
	I1227 09:36:15.766905  622335 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1227 09:36:15.767909  622335 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1227 09:36:15.769291  622335 config.go:182] Loaded profile config "embed-certs-912564": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:36:15.769895  622335 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 09:36:15.793686  622335 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1227 09:36:15.793853  622335 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 09:36:15.849675  622335 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-27 09:36:15.839729427 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1227 09:36:15.849769  622335 docker.go:319] overlay module found
	I1227 09:36:15.851438  622335 out.go:179] * Using the docker driver based on existing profile
	I1227 09:36:15.852555  622335 start.go:309] selected driver: docker
	I1227 09:36:15.852572  622335 start.go:928] validating driver "docker" against &{Name:embed-certs-912564 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:embed-certs-912564 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 09:36:15.852663  622335 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 09:36:15.853278  622335 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 09:36:15.905518  622335 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-27 09:36:15.896501582 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1227 09:36:15.905807  622335 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 09:36:15.905858  622335 cni.go:84] Creating CNI manager for ""
	I1227 09:36:15.905926  622335 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 09:36:15.905973  622335 start.go:353] cluster config:
	{Name:embed-certs-912564 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:embed-certs-912564 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 09:36:15.907451  622335 out.go:179] * Starting "embed-certs-912564" primary control-plane node in "embed-certs-912564" cluster
	I1227 09:36:15.908326  622335 cache.go:134] Beginning downloading kic base image for docker with crio
	I1227 09:36:15.909241  622335 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
	I1227 09:36:15.910102  622335 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 09:36:15.910131  622335 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22343-373581/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4
	I1227 09:36:15.910156  622335 cache.go:65] Caching tarball of preloaded images
	I1227 09:36:15.910205  622335 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1227 09:36:15.910262  622335 preload.go:251] Found /home/jenkins/minikube-integration/22343-373581/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1227 09:36:15.910273  622335 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I1227 09:36:15.910379  622335 profile.go:143] Saving config to /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/embed-certs-912564/config.json ...
	I1227 09:36:15.929803  622335 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon, skipping pull
	I1227 09:36:15.929822  622335 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in daemon, skipping load
	I1227 09:36:15.929849  622335 cache.go:243] Successfully downloaded all kic artifacts
	I1227 09:36:15.929884  622335 start.go:360] acquireMachinesLock for embed-certs-912564: {Name:mk61b0f1dd44336f66b7ae60f44b102943279f72 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 09:36:15.929937  622335 start.go:364] duration metric: took 35.4µs to acquireMachinesLock for "embed-certs-912564"
	I1227 09:36:15.929953  622335 start.go:96] Skipping create...Using existing machine configuration
	I1227 09:36:15.929958  622335 fix.go:54] fixHost starting: 
	I1227 09:36:15.930186  622335 cli_runner.go:164] Run: docker container inspect embed-certs-912564 --format={{.State.Status}}
	I1227 09:36:15.946167  622335 fix.go:112] recreateIfNeeded on embed-certs-912564: state=Stopped err=<nil>
	W1227 09:36:15.946200  622335 fix.go:138] unexpected machine state, will restart: <nil>
	W1227 09:36:14.163271  616179 node_ready.go:57] node "default-k8s-diff-port-497722" has "Ready":"False" status (will retry)
	W1227 09:36:16.163948  616179 node_ready.go:57] node "default-k8s-diff-port-497722" has "Ready":"False" status (will retry)
	W1227 09:36:13.261954  613189 pod_ready.go:104] pod "coredns-5dd5756b68-l2f7v" is not "Ready", error: <nil>
	W1227 09:36:15.262540  613189 pod_ready.go:104] pod "coredns-5dd5756b68-l2f7v" is not "Ready", error: <nil>
	W1227 09:36:17.761765  613189 pod_ready.go:104] pod "coredns-5dd5756b68-l2f7v" is not "Ready", error: <nil>
	I1227 09:36:17.207000  610436 node_ready.go:49] node "no-preload-963457" is "Ready"
	I1227 09:36:17.207025  610436 node_ready.go:38] duration metric: took 13.502511991s for node "no-preload-963457" to be "Ready" ...
	I1227 09:36:17.207039  610436 api_server.go:52] waiting for apiserver process to appear ...
	I1227 09:36:17.207085  610436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 09:36:17.219077  610436 api_server.go:72] duration metric: took 13.880363312s to wait for apiserver process to appear ...
	I1227 09:36:17.219099  610436 api_server.go:88] waiting for apiserver healthz status ...
	I1227 09:36:17.219117  610436 api_server.go:299] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1227 09:36:17.224033  610436 api_server.go:325] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1227 09:36:17.225019  610436 api_server.go:141] control plane version: v1.35.0
	I1227 09:36:17.225043  610436 api_server.go:131] duration metric: took 5.936968ms to wait for apiserver health ...
	I1227 09:36:17.225053  610436 system_pods.go:43] waiting for kube-system pods to appear ...
	I1227 09:36:17.227917  610436 system_pods.go:59] 8 kube-system pods found
	I1227 09:36:17.227951  610436 system_pods.go:61] "coredns-7d764666f9-wnzhx" [2152780b-b980-4e4a-b652-9cd0ec857a2a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 09:36:17.227958  610436 system_pods.go:61] "etcd-no-preload-963457" [ecf5237d-e6e0-4cee-977a-e2356ec1db8f] Running
	I1227 09:36:17.227966  610436 system_pods.go:61] "kindnet-7kw8b" [36ed3b47-67e2-483b-9563-f3366df2f0c5] Running
	I1227 09:36:17.227970  610436 system_pods.go:61] "kube-apiserver-no-preload-963457" [95e8b881-2ba5-464a-951b-8cfe1d65f33b] Running
	I1227 09:36:17.227978  610436 system_pods.go:61] "kube-controller-manager-no-preload-963457" [2f715235-2f6e-414e-911b-d6ee55e67a06] Running
	I1227 09:36:17.227987  610436 system_pods.go:61] "kube-proxy-grkqs" [6f0a5a48-9159-478a-8949-827103a7c85c] Running
	I1227 09:36:17.227997  610436 system_pods.go:61] "kube-scheduler-no-preload-963457" [f5b9da44-e7d5-4901-b9c6-2315b873d53a] Running
	I1227 09:36:17.228004  610436 system_pods.go:61] "storage-provisioner" [3de40d14-cc88-44f8-a071-caea798ff465] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1227 09:36:17.228015  610436 system_pods.go:74] duration metric: took 2.954672ms to wait for pod list to return data ...
	I1227 09:36:17.228026  610436 default_sa.go:34] waiting for default service account to be created ...
	I1227 09:36:17.230166  610436 default_sa.go:45] found service account: "default"
	I1227 09:36:17.230187  610436 default_sa.go:55] duration metric: took 2.152948ms for default service account to be created ...
	I1227 09:36:17.230195  610436 system_pods.go:116] waiting for k8s-apps to be running ...
	I1227 09:36:17.232590  610436 system_pods.go:86] 8 kube-system pods found
	I1227 09:36:17.232614  610436 system_pods.go:89] "coredns-7d764666f9-wnzhx" [2152780b-b980-4e4a-b652-9cd0ec857a2a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 09:36:17.232621  610436 system_pods.go:89] "etcd-no-preload-963457" [ecf5237d-e6e0-4cee-977a-e2356ec1db8f] Running
	I1227 09:36:17.232626  610436 system_pods.go:89] "kindnet-7kw8b" [36ed3b47-67e2-483b-9563-f3366df2f0c5] Running
	I1227 09:36:17.232629  610436 system_pods.go:89] "kube-apiserver-no-preload-963457" [95e8b881-2ba5-464a-951b-8cfe1d65f33b] Running
	I1227 09:36:17.232633  610436 system_pods.go:89] "kube-controller-manager-no-preload-963457" [2f715235-2f6e-414e-911b-d6ee55e67a06] Running
	I1227 09:36:17.232636  610436 system_pods.go:89] "kube-proxy-grkqs" [6f0a5a48-9159-478a-8949-827103a7c85c] Running
	I1227 09:36:17.232639  610436 system_pods.go:89] "kube-scheduler-no-preload-963457" [f5b9da44-e7d5-4901-b9c6-2315b873d53a] Running
	I1227 09:36:17.232647  610436 system_pods.go:89] "storage-provisioner" [3de40d14-cc88-44f8-a071-caea798ff465] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1227 09:36:17.232678  610436 retry.go:84] will retry after 300ms: missing components: kube-dns
	I1227 09:36:17.541732  610436 system_pods.go:86] 8 kube-system pods found
	I1227 09:36:17.541764  610436 system_pods.go:89] "coredns-7d764666f9-wnzhx" [2152780b-b980-4e4a-b652-9cd0ec857a2a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 09:36:17.541770  610436 system_pods.go:89] "etcd-no-preload-963457" [ecf5237d-e6e0-4cee-977a-e2356ec1db8f] Running
	I1227 09:36:17.541776  610436 system_pods.go:89] "kindnet-7kw8b" [36ed3b47-67e2-483b-9563-f3366df2f0c5] Running
	I1227 09:36:17.541780  610436 system_pods.go:89] "kube-apiserver-no-preload-963457" [95e8b881-2ba5-464a-951b-8cfe1d65f33b] Running
	I1227 09:36:17.541785  610436 system_pods.go:89] "kube-controller-manager-no-preload-963457" [2f715235-2f6e-414e-911b-d6ee55e67a06] Running
	I1227 09:36:17.541815  610436 system_pods.go:89] "kube-proxy-grkqs" [6f0a5a48-9159-478a-8949-827103a7c85c] Running
	I1227 09:36:17.541822  610436 system_pods.go:89] "kube-scheduler-no-preload-963457" [f5b9da44-e7d5-4901-b9c6-2315b873d53a] Running
	I1227 09:36:17.541831  610436 system_pods.go:89] "storage-provisioner" [3de40d14-cc88-44f8-a071-caea798ff465] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1227 09:36:17.912221  610436 system_pods.go:86] 8 kube-system pods found
	I1227 09:36:17.912247  610436 system_pods.go:89] "coredns-7d764666f9-wnzhx" [2152780b-b980-4e4a-b652-9cd0ec857a2a] Running
	I1227 09:36:17.912252  610436 system_pods.go:89] "etcd-no-preload-963457" [ecf5237d-e6e0-4cee-977a-e2356ec1db8f] Running
	I1227 09:36:17.912255  610436 system_pods.go:89] "kindnet-7kw8b" [36ed3b47-67e2-483b-9563-f3366df2f0c5] Running
	I1227 09:36:17.912259  610436 system_pods.go:89] "kube-apiserver-no-preload-963457" [95e8b881-2ba5-464a-951b-8cfe1d65f33b] Running
	I1227 09:36:17.912262  610436 system_pods.go:89] "kube-controller-manager-no-preload-963457" [2f715235-2f6e-414e-911b-d6ee55e67a06] Running
	I1227 09:36:17.912265  610436 system_pods.go:89] "kube-proxy-grkqs" [6f0a5a48-9159-478a-8949-827103a7c85c] Running
	I1227 09:36:17.912269  610436 system_pods.go:89] "kube-scheduler-no-preload-963457" [f5b9da44-e7d5-4901-b9c6-2315b873d53a] Running
	I1227 09:36:17.912272  610436 system_pods.go:89] "storage-provisioner" [3de40d14-cc88-44f8-a071-caea798ff465] Running
	I1227 09:36:17.912279  610436 system_pods.go:126] duration metric: took 682.077772ms to wait for k8s-apps to be running ...
	I1227 09:36:17.912286  610436 system_svc.go:44] waiting for kubelet service to be running ....
	I1227 09:36:17.912328  610436 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 09:36:17.925757  610436 system_svc.go:56] duration metric: took 13.459067ms WaitForService to wait for kubelet
	I1227 09:36:17.925808  610436 kubeadm.go:587] duration metric: took 14.587094691s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 09:36:17.925832  610436 node_conditions.go:102] verifying NodePressure condition ...
	I1227 09:36:17.928354  610436 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1227 09:36:17.928377  610436 node_conditions.go:123] node cpu capacity is 8
	I1227 09:36:17.928390  610436 node_conditions.go:105] duration metric: took 2.552518ms to run NodePressure ...
	I1227 09:36:17.928402  610436 start.go:242] waiting for startup goroutines ...
	I1227 09:36:17.928411  610436 start.go:247] waiting for cluster config update ...
	I1227 09:36:17.928428  610436 start.go:256] writing updated cluster config ...
	I1227 09:36:17.928688  610436 ssh_runner.go:195] Run: rm -f paused
	I1227 09:36:17.932505  610436 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1227 09:36:18.012128  610436 pod_ready.go:83] waiting for pod "coredns-7d764666f9-wnzhx" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:36:18.016092  610436 pod_ready.go:94] pod "coredns-7d764666f9-wnzhx" is "Ready"
	I1227 09:36:18.016113  610436 pod_ready.go:86] duration metric: took 3.954033ms for pod "coredns-7d764666f9-wnzhx" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:36:18.018039  610436 pod_ready.go:83] waiting for pod "etcd-no-preload-963457" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:36:18.021308  610436 pod_ready.go:94] pod "etcd-no-preload-963457" is "Ready"
	I1227 09:36:18.021328  610436 pod_ready.go:86] duration metric: took 3.271462ms for pod "etcd-no-preload-963457" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:36:18.022843  610436 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-963457" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:36:18.025875  610436 pod_ready.go:94] pod "kube-apiserver-no-preload-963457" is "Ready"
	I1227 09:36:18.025892  610436 pod_ready.go:86] duration metric: took 3.027767ms for pod "kube-apiserver-no-preload-963457" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:36:18.027544  610436 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-963457" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:36:18.336856  610436 pod_ready.go:94] pod "kube-controller-manager-no-preload-963457" is "Ready"
	I1227 09:36:18.336887  610436 pod_ready.go:86] duration metric: took 309.32474ms for pod "kube-controller-manager-no-preload-963457" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:36:18.537212  610436 pod_ready.go:83] waiting for pod "kube-proxy-grkqs" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:36:18.936288  610436 pod_ready.go:94] pod "kube-proxy-grkqs" is "Ready"
	I1227 09:36:18.936315  610436 pod_ready.go:86] duration metric: took 399.078348ms for pod "kube-proxy-grkqs" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:36:19.137254  610436 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-963457" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:36:19.536512  610436 pod_ready.go:94] pod "kube-scheduler-no-preload-963457" is "Ready"
	I1227 09:36:19.536545  610436 pod_ready.go:86] duration metric: took 399.259363ms for pod "kube-scheduler-no-preload-963457" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:36:19.536571  610436 pod_ready.go:40] duration metric: took 1.604026487s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
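
[Editor's note] The waits above come from minikube's pod_ready helper, which polls each labelled kube-system pod until its PodReady condition is True or the pod disappears. For reference, a minimal client-go sketch of that check, under the assumption of the label selectors and kubeconfig path shown in this log (the helper name isPodReady and the 500ms poll interval are illustrative, not minikube's exact internals):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's PodReady condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/22343-373581/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	// Same label selectors the log waits on, polled until Ready or gone.
	for _, sel := range []string{"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver"} {
		for {
			pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{LabelSelector: sel})
			if err != nil || len(pods.Items) == 0 {
				break // pod gone (or transient error): treated as done in this sketch
			}
			if isPodReady(&pods.Items[0]) {
				break
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println(sel, "ready or gone")
	}
}
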
	I1227 09:36:19.582579  610436 start.go:625] kubectl: 1.35.0, cluster: 1.35.0 (minor skew: 0)
	I1227 09:36:19.584481  610436 out.go:179] * Done! kubectl is now configured to use "no-preload-963457" cluster and "default" namespace by default
	I1227 09:36:15.947788  622335 out.go:252] * Restarting existing docker container for "embed-certs-912564" ...
	I1227 09:36:15.947868  622335 cli_runner.go:164] Run: docker start embed-certs-912564
	I1227 09:36:16.186477  622335 cli_runner.go:164] Run: docker container inspect embed-certs-912564 --format={{.State.Status}}
	I1227 09:36:16.204808  622335 kic.go:430] container "embed-certs-912564" state is running.
	I1227 09:36:16.205231  622335 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-912564
	I1227 09:36:16.224487  622335 profile.go:143] Saving config to /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/embed-certs-912564/config.json ...
	I1227 09:36:16.224742  622335 machine.go:94] provisionDockerMachine start ...
	I1227 09:36:16.224849  622335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-912564
	I1227 09:36:16.243201  622335 main.go:144] libmachine: Using SSH client type: native
	I1227 09:36:16.243427  622335 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33453 <nil> <nil>}
	I1227 09:36:16.243440  622335 main.go:144] libmachine: About to run SSH command:
	hostname
	I1227 09:36:16.244129  622335 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:57058->127.0.0.1:33453: read: connection reset by peer
	I1227 09:36:19.367696  622335 main.go:144] libmachine: SSH cmd err, output: <nil>: embed-certs-912564
	
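[Editor's note] The "connection reset by peer" above is expected right after `docker start`: sshd inside the container is still coming up, and libmachine simply re-dials until the handshake succeeds (about three seconds later here). A hedged sketch of such a retry loop with golang.org/x/crypto/ssh, using the port and key path from this log; dialWithRetry is an illustrative name, and host-key checking is disabled only because the target is a throwaway local test container:

package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

// dialWithRetry re-dials until sshd inside the freshly started container
// accepts the handshake, mirroring the retry visible in the log above.
func dialWithRetry(addr string, cfg *ssh.ClientConfig, deadline time.Duration) (*ssh.Client, error) {
	var lastErr error
	for start := time.Now(); time.Since(start) < deadline; {
		c, err := ssh.Dial("tcp", addr, cfg)
		if err == nil {
			return c, nil
		}
		lastErr = err // e.g. "read: connection reset by peer" while sshd boots
		time.Sleep(250 * time.Millisecond)
	}
	return nil, fmt.Errorf("ssh did not come up: %w", lastErr)
}

func main() {
	keyBytes, err := os.ReadFile("/home/jenkins/minikube-integration/22343-373581/.minikube/machines/embed-certs-912564/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // local test container only
		Timeout:         5 * time.Second,
	}
	client, err := dialWithRetry("127.0.0.1:33453", cfg, 30*time.Second)
	if err != nil {
		panic(err)
	}
	defer client.Close()
	fmt.Println("connected")
}
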
	I1227 09:36:19.367723  622335 ubuntu.go:182] provisioning hostname "embed-certs-912564"
	I1227 09:36:19.367814  622335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-912564
	I1227 09:36:19.386757  622335 main.go:144] libmachine: Using SSH client type: native
	I1227 09:36:19.387127  622335 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33453 <nil> <nil>}
	I1227 09:36:19.387150  622335 main.go:144] libmachine: About to run SSH command:
	sudo hostname embed-certs-912564 && echo "embed-certs-912564" | sudo tee /etc/hostname
	I1227 09:36:19.522771  622335 main.go:144] libmachine: SSH cmd err, output: <nil>: embed-certs-912564
	
	I1227 09:36:19.522877  622335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-912564
	I1227 09:36:19.543038  622335 main.go:144] libmachine: Using SSH client type: native
	I1227 09:36:19.543358  622335 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33453 <nil> <nil>}
	I1227 09:36:19.543388  622335 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-912564' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-912564/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-912564' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1227 09:36:19.668353  622335 main.go:144] libmachine: SSH cmd err, output: <nil>: 
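
[Editor's note] The shell above is how minikube pins 127.0.1.1 to the new hostname: skip if the hostname is already present, rewrite an existing 127.0.1.1 line if there is one, otherwise append. The same idempotent edit as a Go sketch; the file path is parameterized so it can be tried on a scratch copy rather than the real /etc/hosts:

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry mirrors the sed/tee logic above: rewrite an existing
// 127.0.1.1 line to the new hostname, or append one if none exists.
// (The shell version also skips entirely when the hostname is already
// present anywhere in the file; omitted here for brevity.)
func ensureHostsEntry(path, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	lines := strings.Split(string(data), "\n")
	replaced := false
	for i, l := range lines {
		if strings.HasPrefix(l, "127.0.1.1") {
			lines[i] = "127.0.1.1 " + hostname
			replaced = true
		}
	}
	if !replaced {
		lines = append(lines, "127.0.1.1 "+hostname)
	}
	return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0644)
}

func main() {
	// Scratch copy; point at /etc/hosts only with root and a backup.
	if err := ensureHostsEntry("/tmp/hosts.test", "embed-certs-912564"); err != nil {
		fmt.Println(err)
	}
}
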
	I1227 09:36:19.668380  622335 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22343-373581/.minikube CaCertPath:/home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22343-373581/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22343-373581/.minikube}
	I1227 09:36:19.668427  622335 ubuntu.go:190] setting up certificates
	I1227 09:36:19.668447  622335 provision.go:84] configureAuth start
	I1227 09:36:19.668529  622335 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-912564
	I1227 09:36:19.689166  622335 provision.go:143] copyHostCerts
	I1227 09:36:19.689233  622335 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-373581/.minikube/key.pem, removing ...
	I1227 09:36:19.689256  622335 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-373581/.minikube/key.pem
	I1227 09:36:19.689339  622335 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22343-373581/.minikube/key.pem (1679 bytes)
	I1227 09:36:19.689483  622335 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-373581/.minikube/ca.pem, removing ...
	I1227 09:36:19.689499  622335 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-373581/.minikube/ca.pem
	I1227 09:36:19.689545  622335 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22343-373581/.minikube/ca.pem (1082 bytes)
	I1227 09:36:19.689664  622335 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-373581/.minikube/cert.pem, removing ...
	I1227 09:36:19.689673  622335 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-373581/.minikube/cert.pem
	I1227 09:36:19.689711  622335 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22343-373581/.minikube/cert.pem (1123 bytes)
	I1227 09:36:19.689881  622335 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22343-373581/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca-key.pem org=jenkins.embed-certs-912564 san=[127.0.0.1 192.168.94.2 embed-certs-912564 localhost minikube]
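
[Editor's note] provision.go signs a server certificate for the machine with the SAN set shown above (loopback, the container IP, the hostname, localhost, minikube). A compact crypto/x509 sketch that builds a certificate with the same SANs; note it self-signs to stay short, whereas minikube signs with the profile's CA key (ca.pem/ca-key.pem):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-912564"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the cluster config
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// Same SAN set as the log line above.
		DNSNames:    []string{"embed-certs-912564", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.94.2")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
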
	I1227 09:36:19.746663  622335 provision.go:177] copyRemoteCerts
	I1227 09:36:19.746730  622335 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1227 09:36:19.746782  622335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-912564
	I1227 09:36:19.766272  622335 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/embed-certs-912564/id_rsa Username:docker}
	I1227 09:36:19.858141  622335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1227 09:36:19.876224  622335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1227 09:36:19.894481  622335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1227 09:36:19.911686  622335 provision.go:87] duration metric: took 243.216642ms to configureAuth
	I1227 09:36:19.911711  622335 ubuntu.go:206] setting minikube options for container-runtime
	I1227 09:36:19.911915  622335 config.go:182] Loaded profile config "embed-certs-912564": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:36:19.912029  622335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-912564
	I1227 09:36:19.930663  622335 main.go:144] libmachine: Using SSH client type: native
	I1227 09:36:19.930962  622335 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33453 <nil> <nil>}
	I1227 09:36:19.930983  622335 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1227 09:36:20.251003  622335 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
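[Editor's note] The SSH command above drops a one-line environment file that the crio systemd unit sources, then restarts CRI-O so the `--insecure-registry 10.96.0.0/12` flag (covering the service CIDR) takes effect. A local-equivalent Go sketch of that step, assuming root and a systemd host:

package main

import (
	"os"
	"os/exec"
)

func main() {
	// Same content the remote printf/tee writes.
	content := "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n"
	if err := os.MkdirAll("/etc/sysconfig", 0755); err != nil {
		panic(err)
	}
	if err := os.WriteFile("/etc/sysconfig/crio.minikube", []byte(content), 0644); err != nil {
		panic(err)
	}
	// Restart CRI-O so the env file is picked up, as the log does.
	if out, err := exec.Command("systemctl", "restart", "crio").CombinedOutput(); err != nil {
		panic(string(out))
	}
}
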
	I1227 09:36:20.251031  622335 machine.go:97] duration metric: took 4.026272116s to provisionDockerMachine
	I1227 09:36:20.251046  622335 start.go:293] postStartSetup for "embed-certs-912564" (driver="docker")
	I1227 09:36:20.251060  622335 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1227 09:36:20.251125  622335 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1227 09:36:20.251200  622335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-912564
	I1227 09:36:20.272340  622335 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/embed-certs-912564/id_rsa Username:docker}
	I1227 09:36:20.363700  622335 ssh_runner.go:195] Run: cat /etc/os-release
	I1227 09:36:20.367711  622335 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1227 09:36:20.367734  622335 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1227 09:36:20.367749  622335 filesync.go:126] Scanning /home/jenkins/minikube-integration/22343-373581/.minikube/addons for local assets ...
	I1227 09:36:20.367820  622335 filesync.go:126] Scanning /home/jenkins/minikube-integration/22343-373581/.minikube/files for local assets ...
	I1227 09:36:20.367922  622335 filesync.go:149] local asset: /home/jenkins/minikube-integration/22343-373581/.minikube/files/etc/ssl/certs/3771712.pem -> 3771712.pem in /etc/ssl/certs
	I1227 09:36:20.368051  622335 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1227 09:36:20.376361  622335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/files/etc/ssl/certs/3771712.pem --> /etc/ssl/certs/3771712.pem (1708 bytes)
	I1227 09:36:20.393895  622335 start.go:296] duration metric: took 142.830385ms for postStartSetup
	I1227 09:36:20.393981  622335 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 09:36:20.394046  622335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-912564
	I1227 09:36:20.412636  622335 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/embed-certs-912564/id_rsa Username:docker}
	I1227 09:36:20.501303  622335 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1227 09:36:20.506127  622335 fix.go:56] duration metric: took 4.576160597s for fixHost
	I1227 09:36:20.506154  622335 start.go:83] releasing machines lock for "embed-certs-912564", held for 4.576205681s
	I1227 09:36:20.506231  622335 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-912564
	I1227 09:36:20.526289  622335 ssh_runner.go:195] Run: cat /version.json
	I1227 09:36:20.526337  622335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-912564
	I1227 09:36:20.526345  622335 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1227 09:36:20.526445  622335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-912564
	I1227 09:36:20.546473  622335 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/embed-certs-912564/id_rsa Username:docker}
	I1227 09:36:20.546990  622335 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/embed-certs-912564/id_rsa Username:docker}
	I1227 09:36:20.635010  622335 ssh_runner.go:195] Run: systemctl --version
	I1227 09:36:20.692254  622335 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1227 09:36:20.729042  622335 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1227 09:36:20.734159  622335 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1227 09:36:20.734289  622335 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1227 09:36:20.742588  622335 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
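
[Editor's note] The find/mv one-liner above moves any bridge or podman CNI config out of the way by appending `.mk_disabled`, so they can't conflict with the kindnet CNI chosen later; here it finds nothing to disable. A Go sketch of the same rename pass over /etc/cni/net.d:

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	dir := "/etc/cni/net.d"
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Println(err) // directory may not exist on a fresh node
		return
	}
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue // already disabled, or not a plain file
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join(dir, name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				fmt.Println(err)
				continue
			}
			fmt.Println("disabled", src)
		}
	}
}
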
	I1227 09:36:20.742612  622335 start.go:496] detecting cgroup driver to use...
	I1227 09:36:20.742656  622335 detect.go:190] detected "systemd" cgroup driver on host os
	I1227 09:36:20.742708  622335 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1227 09:36:20.757772  622335 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 09:36:20.771033  622335 docker.go:218] disabling cri-docker service (if available) ...
	I1227 09:36:20.771095  622335 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1227 09:36:20.785978  622335 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1227 09:36:20.799169  622335 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1227 09:36:20.882315  622335 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1227 09:36:20.965183  622335 docker.go:234] disabling docker service ...
	I1227 09:36:20.965254  622335 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1227 09:36:20.980266  622335 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1227 09:36:20.992591  622335 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1227 09:36:21.074160  622335 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1227 09:36:21.160689  622335 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1227 09:36:21.174204  622335 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 09:36:21.188429  622335 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1227 09:36:21.188490  622335 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:36:21.197653  622335 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1227 09:36:21.197706  622335 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:36:21.206508  622335 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:36:21.215288  622335 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:36:21.224635  622335 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1227 09:36:21.232876  622335 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:36:21.241632  622335 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:36:21.250258  622335 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
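
[Editor's note] The run of sed one-liners above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, force the systemd cgroup manager, reset conmon_cgroup, and open unprivileged port 0 via default_sysctls. A Go sketch of the first two substitutions (the regex patterns mirror the sed expressions; the append/insert edits for conmon_cgroup and default_sysctls are omitted for brevity):

package main

import (
	"os"
	"regexp"
)

// rewrite applies each regex substitution to the file in place,
// the Go equivalent of the `sudo sed -i` calls in the log.
func rewrite(path string, subs map[string]string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	for pat, repl := range subs {
		data = regexp.MustCompile(pat).ReplaceAll(data, []byte(repl))
	}
	return os.WriteFile(path, data, 0644)
}

func main() {
	err := rewrite("/etc/crio/crio.conf.d/02-crio.conf", map[string]string{
		`(?m)^.*pause_image = .*$`:    `pause_image = "registry.k8s.io/pause:3.10.1"`,
		`(?m)^.*cgroup_manager = .*$`: `cgroup_manager = "systemd"`,
	})
	if err != nil {
		panic(err)
	}
}
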
	I1227 09:36:21.259256  622335 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1227 09:36:21.267330  622335 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1227 09:36:21.274844  622335 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 09:36:21.357225  622335 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1227 09:36:21.513416  622335 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1227 09:36:21.513491  622335 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1227 09:36:21.517802  622335 start.go:574] Will wait 60s for crictl version
	I1227 09:36:21.517863  622335 ssh_runner.go:195] Run: which crictl
	I1227 09:36:21.521539  622335 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1227 09:36:21.547358  622335 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1227 09:36:21.547444  622335 ssh_runner.go:195] Run: crio --version
	I1227 09:36:21.578207  622335 ssh_runner.go:195] Run: crio --version
	I1227 09:36:21.609292  622335 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.34.3 ...
	W1227 09:36:18.663032  616179 node_ready.go:57] node "default-k8s-diff-port-497722" has "Ready":"False" status (will retry)
	W1227 09:36:20.664243  616179 node_ready.go:57] node "default-k8s-diff-port-497722" has "Ready":"False" status (will retry)
	I1227 09:36:21.610434  622335 cli_runner.go:164] Run: docker network inspect embed-certs-912564 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 09:36:21.628243  622335 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1227 09:36:21.632413  622335 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 09:36:21.642892  622335 kubeadm.go:884] updating cluster {Name:embed-certs-912564 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:embed-certs-912564 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1227 09:36:21.643006  622335 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 09:36:21.643062  622335 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 09:36:21.677448  622335 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 09:36:21.677471  622335 crio.go:433] Images already preloaded, skipping extraction
	I1227 09:36:21.677524  622335 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 09:36:21.703610  622335 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 09:36:21.703636  622335 cache_images.go:86] Images are preloaded, skipping loading
	I1227 09:36:21.703645  622335 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.35.0 crio true true} ...
	I1227 09:36:21.703772  622335 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-912564 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:embed-certs-912564 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
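
[Editor's note] kubeadm.go renders the kubelet systemd drop-in printed above (the 368-byte 10-kubeadm.conf scp'd a few lines below) from the node IP, hostname, and Kubernetes version. A text/template sketch of that rendering; the template text is reconstructed from the unit as printed in this log, not copied from minikube's source:

package main

import (
	"os"
	"text/template"
)

const unit = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override={{.Hostname}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(unit))
	err := t.Execute(os.Stdout, struct{ Version, Hostname, NodeIP string }{
		Version: "v1.35.0", Hostname: "embed-certs-912564", NodeIP: "192.168.94.2",
	})
	if err != nil {
		panic(err)
	}
}
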
	I1227 09:36:21.703895  622335 ssh_runner.go:195] Run: crio config
	I1227 09:36:21.750305  622335 cni.go:84] Creating CNI manager for ""
	I1227 09:36:21.750333  622335 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 09:36:21.750350  622335 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1227 09:36:21.750373  622335 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-912564 NodeName:embed-certs-912564 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1227 09:36:21.750509  622335 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-912564"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
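[Editor's note] The rendered kubeadm.yaml above stacks four documents: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. A quick sanity-check sketch that the multi-document file parses, using gopkg.in/yaml.v3's streaming decoder (the path is the kubeadm.yaml.new destination from the scp line below):

package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		panic(err)
	}
	defer f.Close()
	// yaml.v3's Decoder yields one document per Decode call,
	// splitting on the "---" separators seen above.
	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			panic(err)
		}
		fmt.Println(doc["apiVersion"], doc["kind"])
	}
}
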
	I1227 09:36:21.750578  622335 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1227 09:36:21.759704  622335 binaries.go:51] Found k8s binaries, skipping transfer
	I1227 09:36:21.759777  622335 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1227 09:36:21.768072  622335 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1227 09:36:21.781002  622335 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1227 09:36:21.793925  622335 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1227 09:36:21.806305  622335 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1227 09:36:21.809898  622335 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 09:36:21.820032  622335 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 09:36:21.920758  622335 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 09:36:21.960171  622335 certs.go:69] Setting up /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/embed-certs-912564 for IP: 192.168.94.2
	I1227 09:36:21.960196  622335 certs.go:195] generating shared ca certs ...
	I1227 09:36:21.960231  622335 certs.go:227] acquiring lock for ca certs: {Name:mkab52f7e7387ca5a3a8b54d27e85c3698ea95d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:36:21.960474  622335 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22343-373581/.minikube/ca.key
	I1227 09:36:21.960554  622335 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22343-373581/.minikube/proxy-client-ca.key
	I1227 09:36:21.960569  622335 certs.go:257] generating profile certs ...
	I1227 09:36:21.960701  622335 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/embed-certs-912564/client.key
	I1227 09:36:21.960779  622335 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/embed-certs-912564/apiserver.key.6601433b
	I1227 09:36:21.960888  622335 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/embed-certs-912564/proxy-client.key
	I1227 09:36:21.961033  622335 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/377171.pem (1338 bytes)
	W1227 09:36:21.961086  622335 certs.go:480] ignoring /home/jenkins/minikube-integration/22343-373581/.minikube/certs/377171_empty.pem, impossibly tiny 0 bytes
	I1227 09:36:21.961113  622335 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca-key.pem (1679 bytes)
	I1227 09:36:21.961150  622335 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca.pem (1082 bytes)
	I1227 09:36:21.961186  622335 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/cert.pem (1123 bytes)
	I1227 09:36:21.961225  622335 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/key.pem (1679 bytes)
	I1227 09:36:21.961298  622335 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-373581/.minikube/files/etc/ssl/certs/3771712.pem (1708 bytes)
	I1227 09:36:21.962178  622335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1227 09:36:21.985651  622335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1227 09:36:22.006677  622335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1227 09:36:22.029280  622335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1227 09:36:22.054424  622335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/embed-certs-912564/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1227 09:36:22.077264  622335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/embed-certs-912564/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1227 09:36:22.095602  622335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/embed-certs-912564/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1227 09:36:22.113971  622335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/embed-certs-912564/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1227 09:36:22.131748  622335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/certs/377171.pem --> /usr/share/ca-certificates/377171.pem (1338 bytes)
	I1227 09:36:22.149344  622335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/files/etc/ssl/certs/3771712.pem --> /usr/share/ca-certificates/3771712.pem (1708 bytes)
	I1227 09:36:22.167734  622335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1227 09:36:22.187888  622335 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1227 09:36:22.201027  622335 ssh_runner.go:195] Run: openssl version
	I1227 09:36:22.207221  622335 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:36:22.214467  622335 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1227 09:36:22.221999  622335 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:36:22.226212  622335 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 09:06 /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:36:22.226259  622335 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:36:22.269710  622335 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1227 09:36:22.277804  622335 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/377171.pem
	I1227 09:36:22.285678  622335 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/377171.pem /etc/ssl/certs/377171.pem
	I1227 09:36:22.293081  622335 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/377171.pem
	I1227 09:36:22.297054  622335 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 27 09:09 /usr/share/ca-certificates/377171.pem
	I1227 09:36:22.297104  622335 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/377171.pem
	I1227 09:36:22.331452  622335 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1227 09:36:22.339171  622335 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3771712.pem
	I1227 09:36:22.347116  622335 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3771712.pem /etc/ssl/certs/3771712.pem
	I1227 09:36:22.354513  622335 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3771712.pem
	I1227 09:36:22.358217  622335 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 27 09:09 /usr/share/ca-certificates/3771712.pem
	I1227 09:36:22.358268  622335 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3771712.pem
	I1227 09:36:22.394066  622335 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1227 09:36:22.402772  622335 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1227 09:36:22.406779  622335 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1227 09:36:22.443933  622335 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1227 09:36:22.482195  622335 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1227 09:36:22.529477  622335 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1227 09:36:22.575213  622335 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1227 09:36:22.634783  622335 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
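
[Editor's note] Each `openssl x509 -checkend 86400` call above asks whether the given certificate will expire within the next 24 hours, which decides whether the existing control-plane certs can be reused. The Go equivalent, as a sketch (expiresWithin is an illustrative name):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin mirrors `openssl x509 -checkend`: true if the cert's
// NotAfter falls inside the next d.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 86400*time.Second)
	if err != nil {
		panic(err)
	}
	fmt.Println("expires within 24h:", soon)
}
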
	I1227 09:36:22.671980  622335 kubeadm.go:401] StartCluster: {Name:embed-certs-912564 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:embed-certs-912564 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 09:36:22.672057  622335 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 09:36:22.672116  622335 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 09:36:22.705162  622335 cri.go:96] found id: "5383d4cdce95af97f9b9e8e07db61c856f19c8db586c179d8ff736a43046829e"
	I1227 09:36:22.705186  622335 cri.go:96] found id: "4073c03ac98fe56856e504a1aa0d5a1748d26e6ce500dc31ad8e91ee49384cd6"
	I1227 09:36:22.705192  622335 cri.go:96] found id: "ba83fd494a8c5a1bc7eb22555934e2b74494963aa284a3786fa73f76c60a9175"
	I1227 09:36:22.705196  622335 cri.go:96] found id: "663c76b88f42532f7c763b6916bdc80252b590b27aa690c8fe09d547aca1eb6c"
	I1227 09:36:22.705205  622335 cri.go:96] found id: ""
	I1227 09:36:22.705250  622335 ssh_runner.go:195] Run: sudo runc list -f json
	W1227 09:36:22.718695  622335 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T09:36:22Z" level=error msg="open /run/runc: no such file or directory"
	I1227 09:36:22.718785  622335 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1227 09:36:22.726975  622335 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1227 09:36:22.726995  622335 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1227 09:36:22.727046  622335 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1227 09:36:22.736032  622335 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1227 09:36:22.737138  622335 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-912564" does not appear in /home/jenkins/minikube-integration/22343-373581/kubeconfig
	I1227 09:36:22.737771  622335 kubeconfig.go:62] /home/jenkins/minikube-integration/22343-373581/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-912564" cluster setting kubeconfig missing "embed-certs-912564" context setting]
	I1227 09:36:22.738693  622335 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-373581/kubeconfig: {Name:mkb411689a0952d3f8325ae3ededdb7352d2980c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
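
[Editor's note] kubeconfig.go noticed the profile's cluster and context entries are missing from the shared kubeconfig and repairs it under a write lock. A clientcmd sketch of that repair; the server URL and paths are taken from surrounding log lines, the locking is omitted, and the AuthInfo name is an assumption (it must match a user entry already in the kubeconfig):

package main

import (
	"k8s.io/client-go/tools/clientcmd"
	clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
)

func main() {
	path := "/home/jenkins/minikube-integration/22343-373581/kubeconfig"
	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		panic(err)
	}
	// Add the missing cluster entry.
	cluster := clientcmdapi.NewCluster()
	cluster.Server = "https://192.168.94.2:8443"
	cluster.CertificateAuthority = "/home/jenkins/minikube-integration/22343-373581/.minikube/ca.crt"
	cfg.Clusters["embed-certs-912564"] = cluster

	// Add the missing context pointing at that cluster.
	ctx := clientcmdapi.NewContext()
	ctx.Cluster = "embed-certs-912564"
	ctx.AuthInfo = "embed-certs-912564" // assumed to exist as a user entry
	cfg.Contexts["embed-certs-912564"] = ctx

	if err := clientcmd.WriteToFile(*cfg, path); err != nil {
		panic(err)
	}
}
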
	I1227 09:36:22.740844  622335 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1227 09:36:22.750818  622335 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.94.2
	I1227 09:36:22.750853  622335 kubeadm.go:602] duration metric: took 23.85154ms to restartPrimaryControlPlane
	I1227 09:36:22.750864  622335 kubeadm.go:403] duration metric: took 78.893214ms to StartCluster
	I1227 09:36:22.750883  622335 settings.go:142] acquiring lock: {Name:mkc867fd794159ff1847b43b60548e454c403aa9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:36:22.750952  622335 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22343-373581/kubeconfig
	I1227 09:36:22.753086  622335 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-373581/kubeconfig: {Name:mkb411689a0952d3f8325ae3ededdb7352d2980c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:36:22.753360  622335 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1227 09:36:22.753437  622335 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1227 09:36:22.753532  622335 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-912564"
	I1227 09:36:22.753555  622335 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-912564"
	I1227 09:36:22.753556  622335 addons.go:70] Setting dashboard=true in profile "embed-certs-912564"
	W1227 09:36:22.753563  622335 addons.go:248] addon storage-provisioner should already be in state true
	I1227 09:36:22.753573  622335 addons.go:239] Setting addon dashboard=true in "embed-certs-912564"
	W1227 09:36:22.753581  622335 addons.go:248] addon dashboard should already be in state true
	I1227 09:36:22.753576  622335 addons.go:70] Setting default-storageclass=true in profile "embed-certs-912564"
	I1227 09:36:22.753593  622335 host.go:66] Checking if "embed-certs-912564" exists ...
	I1227 09:36:22.753606  622335 host.go:66] Checking if "embed-certs-912564" exists ...
	I1227 09:36:22.753608  622335 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-912564"
	I1227 09:36:22.753609  622335 config.go:182] Loaded profile config "embed-certs-912564": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:36:22.753938  622335 cli_runner.go:164] Run: docker container inspect embed-certs-912564 --format={{.State.Status}}
	I1227 09:36:22.754139  622335 cli_runner.go:164] Run: docker container inspect embed-certs-912564 --format={{.State.Status}}
	I1227 09:36:22.754187  622335 cli_runner.go:164] Run: docker container inspect embed-certs-912564 --format={{.State.Status}}
	I1227 09:36:22.779956  622335 out.go:179] * Verifying Kubernetes components...
	I1227 09:36:22.780837  622335 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1227 09:36:22.780872  622335 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1227 09:36:22.781059  622335 addons.go:239] Setting addon default-storageclass=true in "embed-certs-912564"
	W1227 09:36:22.781224  622335 addons.go:248] addon default-storageclass should already be in state true
	I1227 09:36:22.781269  622335 host.go:66] Checking if "embed-certs-912564" exists ...
	I1227 09:36:22.781577  622335 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 09:36:22.781774  622335 cli_runner.go:164] Run: docker container inspect embed-certs-912564 --format={{.State.Status}}
	I1227 09:36:22.784055  622335 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1227 09:36:22.784074  622335 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1227 09:36:22.784123  622335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-912564
	I1227 09:36:22.785174  622335 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	W1227 09:36:19.763616  613189 pod_ready.go:104] pod "coredns-5dd5756b68-l2f7v" is not "Ready", error: <nil>
	W1227 09:36:22.263259  613189 pod_ready.go:104] pod "coredns-5dd5756b68-l2f7v" is not "Ready", error: <nil>
	I1227 09:36:22.786204  622335 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1227 09:36:22.786220  622335 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1227 09:36:22.786279  622335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-912564
	I1227 09:36:22.807331  622335 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1227 09:36:22.807359  622335 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1227 09:36:22.807427  622335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-912564
	I1227 09:36:22.807672  622335 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/embed-certs-912564/id_rsa Username:docker}
	I1227 09:36:22.811339  622335 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/embed-certs-912564/id_rsa Username:docker}
	I1227 09:36:22.843561  622335 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/embed-certs-912564/id_rsa Username:docker}
	I1227 09:36:22.926282  622335 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 09:36:22.928069  622335 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1227 09:36:22.937857  622335 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1227 09:36:22.937883  622335 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1227 09:36:22.940823  622335 node_ready.go:35] waiting up to 6m0s for node "embed-certs-912564" to be "Ready" ...
	I1227 09:36:22.952448  622335 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1227 09:36:22.952469  622335 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1227 09:36:22.954110  622335 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1227 09:36:22.967202  622335 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1227 09:36:22.967226  622335 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1227 09:36:22.981928  622335 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1227 09:36:22.981953  622335 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1227 09:36:22.998766  622335 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1227 09:36:22.998803  622335 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1227 09:36:23.012500  622335 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1227 09:36:23.012534  622335 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1227 09:36:23.025289  622335 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1227 09:36:23.025315  622335 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1227 09:36:23.037841  622335 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1227 09:36:23.037868  622335 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1227 09:36:23.050151  622335 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1227 09:36:23.050172  622335 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1227 09:36:23.063127  622335 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1227 09:36:24.188905  622335 node_ready.go:49] node "embed-certs-912564" is "Ready"
	I1227 09:36:24.188945  622335 node_ready.go:38] duration metric: took 1.248089417s for node "embed-certs-912564" to be "Ready" ...
	I1227 09:36:24.188966  622335 api_server.go:52] waiting for apiserver process to appear ...
	I1227 09:36:24.189025  622335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 09:36:24.706854  622335 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.778752913s)
	I1227 09:36:24.706946  622335 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.75280549s)
	I1227 09:36:24.707028  622335 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.643865472s)
	I1227 09:36:24.707081  622335 api_server.go:72] duration metric: took 1.953688648s to wait for apiserver process to appear ...
	I1227 09:36:24.707107  622335 api_server.go:88] waiting for apiserver healthz status ...
	I1227 09:36:24.707132  622335 api_server.go:299] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1227 09:36:24.708777  622335 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-912564 addons enable metrics-server
	
	I1227 09:36:24.713189  622335 api_server.go:325] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1227 09:36:24.713214  622335 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1227 09:36:24.718380  622335 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1227 09:36:24.719363  622335 addons.go:530] duration metric: took 1.965938957s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1227 09:36:25.207981  622335 api_server.go:299] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1227 09:36:25.212198  622335 api_server.go:325] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1227 09:36:25.212222  622335 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1227 09:36:25.707923  622335 api_server.go:299] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1227 09:36:25.712098  622335 api_server.go:325] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1227 09:36:25.712980  622335 api_server.go:141] control plane version: v1.35.0
	I1227 09:36:25.713005  622335 api_server.go:131] duration metric: took 1.005888464s to wait for apiserver health ...
	I1227 09:36:25.713013  622335 system_pods.go:43] waiting for kube-system pods to appear ...
	I1227 09:36:25.716444  622335 system_pods.go:59] 8 kube-system pods found
	I1227 09:36:25.716489  622335 system_pods.go:61] "coredns-7d764666f9-vm5hp" [e07c8612-a077-44b5-b84f-6dda3bc90a64] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 09:36:25.716503  622335 system_pods.go:61] "etcd-embed-certs-912564" [05ab5aa9-c66f-449d-bb47-6c48d44d1db7] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1227 09:36:25.716510  622335 system_pods.go:61] "kindnet-bznfn" [73083928-8435-4e2e-913b-ff93fa424106] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1227 09:36:25.716517  622335 system_pods.go:61] "kube-apiserver-embed-certs-912564" [16628f2b-e9fa-4772-b8cb-8ef74d603b7f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1227 09:36:25.716522  622335 system_pods.go:61] "kube-controller-manager-embed-certs-912564" [78314a52-dd39-4d37-9d70-8002b392e928] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1227 09:36:25.716532  622335 system_pods.go:61] "kube-proxy-dv8ch" [2a923e9f-87c7-472f-b5b9-506bcdc67cb3] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1227 09:36:25.716537  622335 system_pods.go:61] "kube-scheduler-embed-certs-912564" [81aa6d57-9095-411f-b4a0-653e59fccb72] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1227 09:36:25.716542  622335 system_pods.go:61] "storage-provisioner" [af70aaa7-5435-48e3-8275-f12100402980] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1227 09:36:25.716548  622335 system_pods.go:74] duration metric: took 3.528996ms to wait for pod list to return data ...
	I1227 09:36:25.716555  622335 default_sa.go:34] waiting for default service account to be created ...
	I1227 09:36:25.719151  622335 default_sa.go:45] found service account: "default"
	I1227 09:36:25.719169  622335 default_sa.go:55] duration metric: took 2.608678ms for default service account to be created ...
	I1227 09:36:25.719176  622335 system_pods.go:116] waiting for k8s-apps to be running ...
	I1227 09:36:25.721738  622335 system_pods.go:86] 8 kube-system pods found
	I1227 09:36:25.721763  622335 system_pods.go:89] "coredns-7d764666f9-vm5hp" [e07c8612-a077-44b5-b84f-6dda3bc90a64] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 09:36:25.721771  622335 system_pods.go:89] "etcd-embed-certs-912564" [05ab5aa9-c66f-449d-bb47-6c48d44d1db7] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1227 09:36:25.721778  622335 system_pods.go:89] "kindnet-bznfn" [73083928-8435-4e2e-913b-ff93fa424106] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1227 09:36:25.721786  622335 system_pods.go:89] "kube-apiserver-embed-certs-912564" [16628f2b-e9fa-4772-b8cb-8ef74d603b7f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1227 09:36:25.721805  622335 system_pods.go:89] "kube-controller-manager-embed-certs-912564" [78314a52-dd39-4d37-9d70-8002b392e928] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1227 09:36:25.721816  622335 system_pods.go:89] "kube-proxy-dv8ch" [2a923e9f-87c7-472f-b5b9-506bcdc67cb3] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1227 09:36:25.721831  622335 system_pods.go:89] "kube-scheduler-embed-certs-912564" [81aa6d57-9095-411f-b4a0-653e59fccb72] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1227 09:36:25.721839  622335 system_pods.go:89] "storage-provisioner" [af70aaa7-5435-48e3-8275-f12100402980] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1227 09:36:25.721850  622335 system_pods.go:126] duration metric: took 2.668061ms to wait for k8s-apps to be running ...
	I1227 09:36:25.721859  622335 system_svc.go:44] waiting for kubelet service to be running ....
	I1227 09:36:25.721906  622335 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 09:36:25.735465  622335 system_svc.go:56] duration metric: took 13.598326ms WaitForService to wait for kubelet
	I1227 09:36:25.735491  622335 kubeadm.go:587] duration metric: took 2.98210021s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 09:36:25.735514  622335 node_conditions.go:102] verifying NodePressure condition ...
	I1227 09:36:25.737852  622335 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1227 09:36:25.737873  622335 node_conditions.go:123] node cpu capacity is 8
	I1227 09:36:25.737888  622335 node_conditions.go:105] duration metric: took 2.365444ms to run NodePressure ...
	I1227 09:36:25.737899  622335 start.go:242] waiting for startup goroutines ...
	I1227 09:36:25.737908  622335 start.go:247] waiting for cluster config update ...
	I1227 09:36:25.737919  622335 start.go:256] writing updated cluster config ...
	I1227 09:36:25.738145  622335 ssh_runner.go:195] Run: rm -f paused
	I1227 09:36:25.741968  622335 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1227 09:36:25.745155  622335 pod_ready.go:83] waiting for pod "coredns-7d764666f9-vm5hp" in "kube-system" namespace to be "Ready" or be gone ...
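
	==> note: apiserver healthz polling <==

	The 500 responses above, each listing "[-]poststarthook/rbac/bootstrap-roles failed: reason withheld" until the hook completes and /healthz finally returns 200 "ok", come from minikube polling the endpoint roughly every 500ms. Below is a minimal Go sketch of that poll pattern — not minikube's actual api_server.go; the endpoint, interval, timeout, and InsecureSkipVerify are assumptions for illustration (the apiserver cert is self-signed in this setup).

	// healthz_poll.go — illustrative sketch of the /healthz poll loop seen above.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func waitForHealthz(url string, timeout time.Duration) error {
		// Skip cert verification for this sketch only (self-signed apiserver cert).
		client := &http.Client{
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			Timeout:   5 * time.Second,
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // body is "ok", as in the log
				}
				// 500 with "[-]poststarthook/... failed" lines, as in the log.
				fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond) // the log shows ~500ms between attempts
		}
		return fmt.Errorf("apiserver not healthy after %s", timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.94.2:8443/healthz", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}
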
	W1227 09:36:22.664879  616179 node_ready.go:57] node "default-k8s-diff-port-497722" has "Ready":"False" status (will retry)
	I1227 09:36:23.163413  616179 node_ready.go:49] node "default-k8s-diff-port-497722" is "Ready"
	I1227 09:36:23.163451  616179 node_ready.go:38] duration metric: took 13.503160256s for node "default-k8s-diff-port-497722" to be "Ready" ...
	I1227 09:36:23.163470  616179 api_server.go:52] waiting for apiserver process to appear ...
	I1227 09:36:23.163548  616179 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 09:36:23.178899  616179 api_server.go:72] duration metric: took 13.808108711s to wait for apiserver process to appear ...
	I1227 09:36:23.178931  616179 api_server.go:88] waiting for apiserver healthz status ...
	I1227 09:36:23.178966  616179 api_server.go:299] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I1227 09:36:23.183768  616179 api_server.go:325] https://192.168.103.2:8444/healthz returned 200:
	ok
	I1227 09:36:23.184969  616179 api_server.go:141] control plane version: v1.35.0
	I1227 09:36:23.184994  616179 api_server.go:131] duration metric: took 6.056457ms to wait for apiserver health ...
	I1227 09:36:23.185003  616179 system_pods.go:43] waiting for kube-system pods to appear ...
	I1227 09:36:23.188536  616179 system_pods.go:59] 8 kube-system pods found
	I1227 09:36:23.188575  616179 system_pods.go:61] "coredns-7d764666f9-wfv5r" [c9445108-899a-4589-9501-4ffa7cd80a43] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 09:36:23.188586  616179 system_pods.go:61] "etcd-default-k8s-diff-port-497722" [5e51d021-5ac6-42ab-82c9-2f3db25a111e] Running
	I1227 09:36:23.188599  616179 system_pods.go:61] "kindnet-rd4dj" [c9a44ecf-3860-4022-bcbe-25cfdb86502a] Running
	I1227 09:36:23.188605  616179 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-497722" [fc2ed6d4-1f2c-4e4d-9a59-07b9ef712893] Running
	I1227 09:36:23.188611  616179 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-497722" [c56ec738-10be-417c-808c-c65189f7a6a8] Running
	I1227 09:36:23.188619  616179 system_pods.go:61] "kube-proxy-6z4vt" [25c2458e-d68a-488d-803c-80e0c6191bad] Running
	I1227 09:36:23.188624  616179 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-497722" [fccea7b8-16e7-488f-b2a4-3d46c2d108d9] Running
	I1227 09:36:23.188633  616179 system_pods.go:61] "storage-provisioner" [c84aab32-9d34-4b1d-a3ee-813926808b75] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1227 09:36:23.188641  616179 system_pods.go:74] duration metric: took 3.631884ms to wait for pod list to return data ...
	I1227 09:36:23.188655  616179 default_sa.go:34] waiting for default service account to be created ...
	I1227 09:36:23.191155  616179 default_sa.go:45] found service account: "default"
	I1227 09:36:23.191180  616179 default_sa.go:55] duration metric: took 2.516479ms for default service account to be created ...
	I1227 09:36:23.191191  616179 system_pods.go:116] waiting for k8s-apps to be running ...
	I1227 09:36:23.194108  616179 system_pods.go:86] 8 kube-system pods found
	I1227 09:36:23.194138  616179 system_pods.go:89] "coredns-7d764666f9-wfv5r" [c9445108-899a-4589-9501-4ffa7cd80a43] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 09:36:23.194145  616179 system_pods.go:89] "etcd-default-k8s-diff-port-497722" [5e51d021-5ac6-42ab-82c9-2f3db25a111e] Running
	I1227 09:36:23.194154  616179 system_pods.go:89] "kindnet-rd4dj" [c9a44ecf-3860-4022-bcbe-25cfdb86502a] Running
	I1227 09:36:23.194160  616179 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-497722" [fc2ed6d4-1f2c-4e4d-9a59-07b9ef712893] Running
	I1227 09:36:23.194165  616179 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-497722" [c56ec738-10be-417c-808c-c65189f7a6a8] Running
	I1227 09:36:23.194171  616179 system_pods.go:89] "kube-proxy-6z4vt" [25c2458e-d68a-488d-803c-80e0c6191bad] Running
	I1227 09:36:23.194175  616179 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-497722" [fccea7b8-16e7-488f-b2a4-3d46c2d108d9] Running
	I1227 09:36:23.194179  616179 system_pods.go:89] "storage-provisioner" [c84aab32-9d34-4b1d-a3ee-813926808b75] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1227 09:36:23.194203  616179 retry.go:84] will retry after 200ms: missing components: kube-dns
	I1227 09:36:23.406401  616179 system_pods.go:86] 8 kube-system pods found
	I1227 09:36:23.406445  616179 system_pods.go:89] "coredns-7d764666f9-wfv5r" [c9445108-899a-4589-9501-4ffa7cd80a43] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 09:36:23.406454  616179 system_pods.go:89] "etcd-default-k8s-diff-port-497722" [5e51d021-5ac6-42ab-82c9-2f3db25a111e] Running
	I1227 09:36:23.406461  616179 system_pods.go:89] "kindnet-rd4dj" [c9a44ecf-3860-4022-bcbe-25cfdb86502a] Running
	I1227 09:36:23.406467  616179 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-497722" [fc2ed6d4-1f2c-4e4d-9a59-07b9ef712893] Running
	I1227 09:36:23.406473  616179 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-497722" [c56ec738-10be-417c-808c-c65189f7a6a8] Running
	I1227 09:36:23.406478  616179 system_pods.go:89] "kube-proxy-6z4vt" [25c2458e-d68a-488d-803c-80e0c6191bad] Running
	I1227 09:36:23.406483  616179 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-497722" [fccea7b8-16e7-488f-b2a4-3d46c2d108d9] Running
	I1227 09:36:23.406488  616179 system_pods.go:89] "storage-provisioner" [c84aab32-9d34-4b1d-a3ee-813926808b75] Running
	I1227 09:36:23.406499  616179 system_pods.go:126] duration metric: took 215.299714ms to wait for k8s-apps to be running ...
	I1227 09:36:23.406513  616179 system_svc.go:44] waiting for kubelet service to be running ....
	I1227 09:36:23.406568  616179 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 09:36:23.422554  616179 system_svc.go:56] duration metric: took 16.02927ms WaitForService to wait for kubelet
	I1227 09:36:23.422585  616179 kubeadm.go:587] duration metric: took 14.05180013s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 09:36:23.422606  616179 node_conditions.go:102] verifying NodePressure condition ...
	I1227 09:36:23.425525  616179 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1227 09:36:23.425563  616179 node_conditions.go:123] node cpu capacity is 8
	I1227 09:36:23.425603  616179 node_conditions.go:105] duration metric: took 2.990468ms to run NodePressure ...
	I1227 09:36:23.425622  616179 start.go:242] waiting for startup goroutines ...
	I1227 09:36:23.425633  616179 start.go:247] waiting for cluster config update ...
	I1227 09:36:23.425646  616179 start.go:256] writing updated cluster config ...
	I1227 09:36:23.426029  616179 ssh_runner.go:195] Run: rm -f paused
	I1227 09:36:23.430159  616179 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1227 09:36:23.433730  616179 pod_ready.go:83] waiting for pod "coredns-7d764666f9-wfv5r" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:36:24.439386  616179 pod_ready.go:94] pod "coredns-7d764666f9-wfv5r" is "Ready"
	I1227 09:36:24.439414  616179 pod_ready.go:86] duration metric: took 1.005660831s for pod "coredns-7d764666f9-wfv5r" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:36:24.442016  616179 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-497722" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:36:24.445826  616179 pod_ready.go:94] pod "etcd-default-k8s-diff-port-497722" is "Ready"
	I1227 09:36:24.445849  616179 pod_ready.go:86] duration metric: took 3.807307ms for pod "etcd-default-k8s-diff-port-497722" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:36:24.447851  616179 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-497722" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:36:24.451495  616179 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-497722" is "Ready"
	I1227 09:36:24.451514  616179 pod_ready.go:86] duration metric: took 3.640701ms for pod "kube-apiserver-default-k8s-diff-port-497722" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:36:24.453395  616179 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-497722" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:36:24.637337  616179 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-497722" is "Ready"
	I1227 09:36:24.637370  616179 pod_ready.go:86] duration metric: took 183.957443ms for pod "kube-controller-manager-default-k8s-diff-port-497722" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:36:24.837272  616179 pod_ready.go:83] waiting for pod "kube-proxy-6z4vt" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:36:25.238026  616179 pod_ready.go:94] pod "kube-proxy-6z4vt" is "Ready"
	I1227 09:36:25.238052  616179 pod_ready.go:86] duration metric: took 400.752514ms for pod "kube-proxy-6z4vt" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:36:25.437923  616179 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-497722" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:36:25.837173  616179 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-497722" is "Ready"
	I1227 09:36:25.837207  616179 pod_ready.go:86] duration metric: took 399.25682ms for pod "kube-scheduler-default-k8s-diff-port-497722" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:36:25.837234  616179 pod_ready.go:40] duration metric: took 2.40703441s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1227 09:36:25.883242  616179 start.go:625] kubectl: 1.35.0, cluster: 1.35.0 (minor skew: 0)
	I1227 09:36:25.886195  616179 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-497722" cluster and "default" namespace by default
	W1227 09:36:24.265838  613189 pod_ready.go:104] pod "coredns-5dd5756b68-l2f7v" is not "Ready", error: <nil>
	W1227 09:36:26.761645  613189 pod_ready.go:104] pod "coredns-5dd5756b68-l2f7v" is not "Ready", error: <nil>
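
	==> note: pod Ready wait <==

	The pod_ready.go lines above wait up to 4m0s for each labelled kube-system pod to report the PodReady condition, retrying until it flips to True or the pod is gone. A minimal client-go sketch of the same wait follows; the kubeconfig path is an assumption and the pod name is taken from the log — this illustrates the pattern, it is not minikube's implementation.

	// podready_sketch.go — illustrative "wait for pod Ready" loop.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// isPodReady reports whether the PodReady condition is True.
	func isPodReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // assumed path
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Mirrors: waiting for pod "coredns-7d764666f9-vm5hp" in "kube-system" to be "Ready".
		err = wait.PollUntilContextTimeout(context.Background(), time.Second, 4*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "coredns-7d764666f9-vm5hp", metav1.GetOptions{})
				if err != nil {
					return false, nil // transient error or pod gone: keep polling
				}
				return isPodReady(pod), nil
			})
		fmt.Println("pod Ready:", err == nil)
	}
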
	
	
	==> CRI-O <==
	Dec 27 09:36:17 no-preload-963457 crio[771]: time="2025-12-27T09:36:17.362728743Z" level=info msg="Starting container: 1762f5e5378208135de4ae847c490cf3369d9e9cb0b11952fddbf65bc1a040a1" id=ea1ef7f1-68b2-4c99-bbcd-7c36300429bb name=/runtime.v1.RuntimeService/StartContainer
	Dec 27 09:36:17 no-preload-963457 crio[771]: time="2025-12-27T09:36:17.364389141Z" level=info msg="Started container" PID=2788 containerID=1762f5e5378208135de4ae847c490cf3369d9e9cb0b11952fddbf65bc1a040a1 description=kube-system/coredns-7d764666f9-wnzhx/coredns id=ea1ef7f1-68b2-4c99-bbcd-7c36300429bb name=/runtime.v1.RuntimeService/StartContainer sandboxID=27d8de6f762f4b725f97ccad0581ec076ae0f12683d5bae83984f718fd2b2b33
	Dec 27 09:36:20 no-preload-963457 crio[771]: time="2025-12-27T09:36:20.035341535Z" level=info msg="Running pod sandbox: default/busybox/POD" id=c467cefb-79aa-44f9-9b40-e1067ce90da7 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 27 09:36:20 no-preload-963457 crio[771]: time="2025-12-27T09:36:20.035428406Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 09:36:20 no-preload-963457 crio[771]: time="2025-12-27T09:36:20.040892065Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:c4c44198be9dfbd0736711f93f96cfcc650d75016e2f84dc49ec2647b60ec91b UID:ad53255a-3a97-45fe-bf01-72e0602f22fa NetNS:/var/run/netns/a9b62b23-8448-490f-864c-2ecbbae876a0 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000904fe0}] Aliases:map[]}"
	Dec 27 09:36:20 no-preload-963457 crio[771]: time="2025-12-27T09:36:20.040927988Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Dec 27 09:36:20 no-preload-963457 crio[771]: time="2025-12-27T09:36:20.05055731Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:c4c44198be9dfbd0736711f93f96cfcc650d75016e2f84dc49ec2647b60ec91b UID:ad53255a-3a97-45fe-bf01-72e0602f22fa NetNS:/var/run/netns/a9b62b23-8448-490f-864c-2ecbbae876a0 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000904fe0}] Aliases:map[]}"
	Dec 27 09:36:20 no-preload-963457 crio[771]: time="2025-12-27T09:36:20.050730276Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Dec 27 09:36:20 no-preload-963457 crio[771]: time="2025-12-27T09:36:20.051758192Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 27 09:36:20 no-preload-963457 crio[771]: time="2025-12-27T09:36:20.052948333Z" level=info msg="Ran pod sandbox c4c44198be9dfbd0736711f93f96cfcc650d75016e2f84dc49ec2647b60ec91b with infra container: default/busybox/POD" id=c467cefb-79aa-44f9-9b40-e1067ce90da7 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 27 09:36:20 no-preload-963457 crio[771]: time="2025-12-27T09:36:20.054282192Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=cf91186c-be33-4ec6-aa43-c6bc509ca950 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 09:36:20 no-preload-963457 crio[771]: time="2025-12-27T09:36:20.054408474Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=cf91186c-be33-4ec6-aa43-c6bc509ca950 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 09:36:20 no-preload-963457 crio[771]: time="2025-12-27T09:36:20.054456304Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=cf91186c-be33-4ec6-aa43-c6bc509ca950 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 09:36:20 no-preload-963457 crio[771]: time="2025-12-27T09:36:20.055300722Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=14a1ec1e-a9a1-4cb9-80e4-dadca733b556 name=/runtime.v1.ImageService/PullImage
	Dec 27 09:36:20 no-preload-963457 crio[771]: time="2025-12-27T09:36:20.057991826Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 27 09:36:22 no-preload-963457 crio[771]: time="2025-12-27T09:36:22.014202474Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=14a1ec1e-a9a1-4cb9-80e4-dadca733b556 name=/runtime.v1.ImageService/PullImage
	Dec 27 09:36:22 no-preload-963457 crio[771]: time="2025-12-27T09:36:22.014910567Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=6df6c6b6-26b0-41d6-8b71-4d375e0111e8 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 09:36:22 no-preload-963457 crio[771]: time="2025-12-27T09:36:22.016893967Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=38bf558c-2e26-4308-8266-37cc7bf8701b name=/runtime.v1.ImageService/ImageStatus
	Dec 27 09:36:22 no-preload-963457 crio[771]: time="2025-12-27T09:36:22.020960297Z" level=info msg="Creating container: default/busybox/busybox" id=839b9cf6-bd60-466b-8a00-9eb5bebddaa6 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 09:36:22 no-preload-963457 crio[771]: time="2025-12-27T09:36:22.021089041Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 09:36:22 no-preload-963457 crio[771]: time="2025-12-27T09:36:22.025502731Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 09:36:22 no-preload-963457 crio[771]: time="2025-12-27T09:36:22.026080521Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 09:36:22 no-preload-963457 crio[771]: time="2025-12-27T09:36:22.068655679Z" level=info msg="Created container 1b1e29043c4cb1fffff225400ad91f8def5bbcf2c3c60c8447e5a243966a2368: default/busybox/busybox" id=839b9cf6-bd60-466b-8a00-9eb5bebddaa6 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 09:36:22 no-preload-963457 crio[771]: time="2025-12-27T09:36:22.06924422Z" level=info msg="Starting container: 1b1e29043c4cb1fffff225400ad91f8def5bbcf2c3c60c8447e5a243966a2368" id=cee2bbd3-42e3-45a7-9aa5-816fbcb5801d name=/runtime.v1.RuntimeService/StartContainer
	Dec 27 09:36:22 no-preload-963457 crio[771]: time="2025-12-27T09:36:22.071477284Z" level=info msg="Started container" PID=2863 containerID=1b1e29043c4cb1fffff225400ad91f8def5bbcf2c3c60c8447e5a243966a2368 description=default/busybox/busybox id=cee2bbd3-42e3-45a7-9aa5-816fbcb5801d name=/runtime.v1.RuntimeService/StartContainer sandboxID=c4c44198be9dfbd0736711f93f96cfcc650d75016e2f84dc49ec2647b60ec91b
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	1b1e29043c4cb       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   8 seconds ago       Running             busybox                   0                   c4c44198be9df       busybox                                     default
	1762f5e537820       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                      12 seconds ago      Running             coredns                   0                   27d8de6f762f4       coredns-7d764666f9-wnzhx                    kube-system
	3f2c818829991       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 seconds ago      Running             storage-provisioner       0                   cef3b9b3396f8       storage-provisioner                         kube-system
	074575c18e2db       docker.io/kindest/kindnetd@sha256:7c22558dc06a570d46ea6e8a73b23cdc754eb81f7c08d3441a3171ad359ffc27    24 seconds ago      Running             kindnet-cni               0                   fe151cf0c19b4       kindnet-7kw8b                               kube-system
	12e7a8f170846       32652ff1bbe6b6df16a3dc621fcad4e6e185a2852bffc5847aab79e36c54bfb8                                      26 seconds ago      Running             kube-proxy                0                   01fa83149c46e       kube-proxy-grkqs                            kube-system
	b1216b744a6dc       2c9a4b058bd7e6ec479c38f9e8a1dac2f5ee5b0a3ebda6dfac968e8720229508                                      36 seconds ago      Running             kube-controller-manager   0                   28a35ca85a444       kube-controller-manager-no-preload-963457   kube-system
	fff2aa75330ee       0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2                                      36 seconds ago      Running             etcd                      0                   2f503ea9eabff       etcd-no-preload-963457                      kube-system
	dc1cba8c50145       5c6acd67e9cd10eb60a246cd233db251ef62ea97e6572f897e873f0cb648f499                                      36 seconds ago      Running             kube-apiserver            0                   3f3bc41804947       kube-apiserver-no-preload-963457            kube-system
	9d363ebe71434       550794e3b12ac21ec7fd940bdfb45f7c8dae3c52acd8c70e580b882de20c3dcc                                      36 seconds ago      Running             kube-scheduler            0                   00762534f3c19       kube-scheduler-no-preload-963457            kube-system
	
	
	==> coredns [1762f5e5378208135de4ae847c490cf3369d9e9cb0b11952fddbf65bc1a040a1] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:58298 - 1611 "HINFO IN 4954382606174725194.15465400994158316. udp 55 false 512" NXDOMAIN qr,rd,ra 130 0.064787287s
	
	
	==> describe nodes <==
	Name:               no-preload-963457
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-963457
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6d12708197eb5984f26340287ced0d25367967e8
	                    minikube.k8s.io/name=no-preload-963457
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_27T09_35_58_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 27 Dec 2025 09:35:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-963457
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 27 Dec 2025 09:36:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 27 Dec 2025 09:36:28 +0000   Sat, 27 Dec 2025 09:35:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 27 Dec 2025 09:36:28 +0000   Sat, 27 Dec 2025 09:35:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 27 Dec 2025 09:36:28 +0000   Sat, 27 Dec 2025 09:35:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 27 Dec 2025 09:36:28 +0000   Sat, 27 Dec 2025 09:36:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-963457
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 6a05ebbd9dbde47388fc365d694bbbfd
	  System UUID:                d821149c-44f6-4337-913a-683907f0e23a
	  Boot ID:                    38891013-55ac-4a66-aef6-0b0711ecc60c
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-7d764666f9-wnzhx                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     27s
	  kube-system                 etcd-no-preload-963457                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         33s
	  kube-system                 kindnet-7kw8b                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      27s
	  kube-system                 kube-apiserver-no-preload-963457             250m (3%)     0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-controller-manager-no-preload-963457    200m (2%)     0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-proxy-grkqs                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-scheduler-no-preload-963457             100m (1%)     0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  29s   node-controller  Node no-preload-963457 event: Registered Node no-preload-963457 in Controller
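
	==> note: reading node conditions programmatically <==

	The Conditions and Capacity blocks above are the same fields minikube's NodePressure verification (node_conditions.go earlier in this log) reads through the API: MemoryPressure/DiskPressure/PIDPressure plus cpu and ephemeral-storage capacity. A minimal client-go sketch that fetches them follows; the kubeconfig path is an assumption and the node name is taken from this report.

	// node_conditions.go — illustrative node status read.
	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // assumed path
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		node, err := cs.CoreV1().Nodes().Get(context.Background(), "no-preload-963457", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		// The Conditions table from "describe nodes", minus heartbeat timestamps.
		for _, c := range node.Status.Conditions {
			fmt.Printf("%-16s %-6s %s\n", c.Type, c.Status, c.Reason)
		}
		// The two capacity figures the NodePressure check logs.
		fmt.Println("cpu:", node.Status.Capacity.Cpu().String(),
			"ephemeral-storage:", node.Status.Capacity.StorageEphemeral().String())
	}
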
	
	
	==> dmesg <==
	[Dec27 09:02] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 3a ce 76 ea 00 16 08 06
	[ +15.308804] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 5e ed 76 f4 5a 59 08 06
	[  +0.000321] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 3a ce 76 ea 00 16 08 06
	[  +1.257710] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 62 30 8d 6e b6 d3 08 06
	[Dec27 09:03] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff de f8 3b c4 47 35 08 06
	[  +0.002052] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff de 0f d7 5a 4b 84 08 06
	[ +15.582399] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff f6 ce 0c 6c 86 d2 08 06
	[  +0.000307] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 62 30 8d 6e b6 d3 08 06
	[  +9.916919] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 5e cc 0c a0 d8 19 08 06
	[  +0.000360] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 06 01 72 f8 79 43 08 06
	[ +21.695394] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 66 bb be 81 50 ce 08 06
	[  +0.000381] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff de 0f d7 5a 4b 84 08 06
	
	
	==> etcd [fff2aa75330ee7ecdc5040f00dca69b8a7deb56cbd877c1e39b4466e784d52b3] <==
	{"level":"info","ts":"2025-12-27T09:35:53.504695Z","caller":"embed/etcd.go:890","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-12-27T09:35:54.194190Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"9f0758e1c58a86ed is starting a new election at term 1"}
	{"level":"info","ts":"2025-12-27T09:35:54.194261Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"9f0758e1c58a86ed became pre-candidate at term 1"}
	{"level":"info","ts":"2025-12-27T09:35:54.194307Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 1"}
	{"level":"info","ts":"2025-12-27T09:35:54.194320Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"9f0758e1c58a86ed has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-27T09:35:54.194334Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"9f0758e1c58a86ed became candidate at term 2"}
	{"level":"info","ts":"2025-12-27T09:35:54.194999Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-12-27T09:35:54.195025Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"9f0758e1c58a86ed has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-27T09:35:54.195040Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"9f0758e1c58a86ed became leader at term 2"}
	{"level":"info","ts":"2025-12-27T09:35:54.195047Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-12-27T09:35:54.195656Z","caller":"etcdserver/server.go:2420","msg":"setting up initial cluster version using v3 API","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-27T09:35:54.196069Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:no-preload-963457 ClientURLs:[https://192.168.85.2:2379]}","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-27T09:35:54.196071Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T09:35:54.196100Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T09:35:54.196271Z","caller":"membership/cluster.go:682","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-27T09:35:54.196389Z","caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-27T09:35:54.196405Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-27T09:35:54.196423Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-27T09:35:54.196446Z","caller":"etcdserver/server.go:2440","msg":"cluster version is updated","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-27T09:35:54.196488Z","caller":"version/monitor.go:116","msg":"cluster version differs from storage version.","cluster-version":"3.6.0","storage-version":"3.5.0"}
	{"level":"info","ts":"2025-12-27T09:35:54.196601Z","caller":"schema/migration.go:65","msg":"updated storage version","new-storage-version":"3.6.0"}
	{"level":"info","ts":"2025-12-27T09:35:54.197276Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-27T09:35:54.197347Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-27T09:35:54.199947Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-12-27T09:35:54.200017Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 09:36:30 up  1:18,  0 user,  load average: 3.29, 3.12, 2.31
	Linux no-preload-963457 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [074575c18e2db9558385b54f628648c80de7f2c6e98b804c14491bb0f9fd015d] <==
	I1227 09:36:06.315237       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1227 09:36:06.315537       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1227 09:36:06.315701       1 main.go:148] setting mtu 1500 for CNI 
	I1227 09:36:06.315728       1 main.go:178] kindnetd IP family: "ipv4"
	I1227 09:36:06.315749       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-27T09:36:06Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1227 09:36:06.613842       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1227 09:36:06.613912       1 controller.go:381] "Waiting for informer caches to sync"
	I1227 09:36:06.613927       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1227 09:36:06.803931       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1227 09:36:07.114388       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1227 09:36:07.114422       1 metrics.go:72] Registering metrics
	I1227 09:36:07.114544       1 controller.go:711] "Syncing nftables rules"
	I1227 09:36:16.603869       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1227 09:36:16.603931       1 main.go:301] handling current node
	I1227 09:36:26.603945       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1227 09:36:26.604001       1 main.go:301] handling current node
	
	
	==> kube-apiserver [dc1cba8c50145f34a761ba3527ab997b8687c7aae6595f982e72c4919e897896] <==
	I1227 09:35:55.198588       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1227 09:35:55.198620       1 default_servicecidr_controller.go:169] Creating default ServiceCIDR with CIDRs: [10.96.0.0/12]
	E1227 09:35:55.199457       1 controller.go:201] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1227 09:35:55.204774       1 default_servicecidr_controller.go:231] Setting default ServiceCIDR condition Ready to True
	I1227 09:35:55.235449       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1227 09:35:55.241180       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1227 09:35:55.403223       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1227 09:35:56.101071       1 storage_scheduling.go:123] created PriorityClass system-node-critical with value 2000001000
	I1227 09:35:56.105102       1 storage_scheduling.go:123] created PriorityClass system-cluster-critical with value 2000000000
	I1227 09:35:56.105118       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1227 09:35:56.601555       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1227 09:35:56.642471       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1227 09:35:56.706451       1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1227 09:35:56.715546       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1227 09:35:56.718665       1 controller.go:667] quota admission added evaluator for: endpoints
	I1227 09:35:56.726040       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1227 09:35:57.147186       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1227 09:35:57.775816       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1227 09:35:57.786594       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1227 09:35:57.793814       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1227 09:36:02.804136       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1227 09:36:02.809782       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1227 09:36:03.000395       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1227 09:36:03.148437       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1227 09:36:28.850005       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8443->192.168.85.1:38090: use of closed network connection
	
	
	==> kube-controller-manager [b1216b744a6dc29ea731988d23d807a76ad889f6be902d7a01ac65c646084069] <==
	I1227 09:36:01.953263       1 shared_informer.go:377] "Caches are synced"
	I1227 09:36:01.953267       1 shared_informer.go:377] "Caches are synced"
	I1227 09:36:01.953267       1 shared_informer.go:377] "Caches are synced"
	I1227 09:36:01.953288       1 shared_informer.go:377] "Caches are synced"
	I1227 09:36:01.953301       1 shared_informer.go:377] "Caches are synced"
	I1227 09:36:01.953318       1 shared_informer.go:377] "Caches are synced"
	I1227 09:36:01.953727       1 shared_informer.go:377] "Caches are synced"
	I1227 09:36:01.953296       1 shared_informer.go:377] "Caches are synced"
	I1227 09:36:01.953276       1 shared_informer.go:377] "Caches are synced"
	I1227 09:36:01.954289       1 shared_informer.go:377] "Caches are synced"
	I1227 09:36:01.954305       1 shared_informer.go:377] "Caches are synced"
	I1227 09:36:01.954308       1 shared_informer.go:377] "Caches are synced"
	I1227 09:36:01.954309       1 shared_informer.go:377] "Caches are synced"
	I1227 09:36:01.954466       1 range_allocator.go:177] "Sending events to api server"
	I1227 09:36:01.954506       1 range_allocator.go:181] "Starting range CIDR allocator"
	I1227 09:36:01.954512       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 09:36:01.954517       1 shared_informer.go:377] "Caches are synced"
	I1227 09:36:01.958638       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 09:36:01.963058       1 range_allocator.go:433] "Set node PodCIDR" node="no-preload-963457" podCIDRs=["10.244.0.0/24"]
	I1227 09:36:01.964559       1 shared_informer.go:377] "Caches are synced"
	I1227 09:36:02.053949       1 shared_informer.go:377] "Caches are synced"
	I1227 09:36:02.053968       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1227 09:36:02.053974       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1227 09:36:02.059614       1 shared_informer.go:377] "Caches are synced"
	I1227 09:36:21.957021       1 node_lifecycle_controller.go:1057] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-proxy [12e7a8f170846db309d3c90001e2c51a6071aba04a176ec441febb62ae67ee51] <==
	I1227 09:36:03.647059       1 server_linux.go:53] "Using iptables proxy"
	I1227 09:36:03.716201       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 09:36:03.817267       1 shared_informer.go:377] "Caches are synced"
	I1227 09:36:03.817383       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1227 09:36:03.818113       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1227 09:36:03.844130       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1227 09:36:03.844250       1 server_linux.go:136] "Using iptables Proxier"
	I1227 09:36:03.851083       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1227 09:36:03.851517       1 server.go:529] "Version info" version="v1.35.0"
	I1227 09:36:03.851580       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 09:36:03.852968       1 config.go:106] "Starting endpoint slice config controller"
	I1227 09:36:03.853030       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1227 09:36:03.853085       1 config.go:403] "Starting serviceCIDR config controller"
	I1227 09:36:03.853117       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1227 09:36:03.853177       1 config.go:200] "Starting service config controller"
	I1227 09:36:03.853191       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1227 09:36:03.853511       1 config.go:309] "Starting node config controller"
	I1227 09:36:03.854288       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1227 09:36:03.953756       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1227 09:36:03.954290       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1227 09:36:03.954339       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1227 09:36:03.954341       1 shared_informer.go:356] "Caches are synced" controller="node config"
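
	==> note: informer cache sync <==

	kube-proxy's "Waiting for caches to sync" / "Caches are synced" pairs above are the standard client-go shared-informer startup handshake: start the factory, then block until every informer's local cache has done its initial LIST. A minimal sketch of that pattern with two informers follows, assuming a kubeconfig path; it illustrates the mechanism, not kube-proxy's own config controllers.

	// informer_sync.go — illustrative shared-informer startup.
	package main

	import (
		"fmt"
		"time"

		"k8s.io/client-go/informers"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/cache"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // assumed path
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		factory := informers.NewSharedInformerFactory(cs, 30*time.Second)
		svc := factory.Core().V1().Services().Informer()          // cf. "service config controller"
		eps := factory.Discovery().V1().EndpointSlices().Informer() // cf. "endpoint slice config controller"

		stop := make(chan struct{})
		defer close(stop)
		factory.Start(stop) // "Starting ... config controller"

		// "Waiting for caches to sync" -> "Caches are synced"
		if !cache.WaitForCacheSync(stop, svc.HasSynced, eps.HasSynced) {
			panic("caches never synced")
		}
		fmt.Println("caches are synced")
	}
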
	
	
	==> kube-scheduler [9d363ebe714348b510ba74803635e64ad487b549064e0fb2f8c3c3543cf116e4] <==
	E1227 09:35:55.155203       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E1227 09:35:55.155147       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E1227 09:35:55.155397       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E1227 09:35:55.155428       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E1227 09:35:55.155429       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E1227 09:35:55.155428       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E1227 09:35:55.155524       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E1227 09:35:55.155584       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E1227 09:35:55.155712       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E1227 09:35:55.155738       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E1227 09:35:55.155949       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E1227 09:35:55.156007       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E1227 09:35:55.156107       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E1227 09:35:55.156236       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E1227 09:35:55.156285       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E1227 09:35:55.156308       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E1227 09:35:55.156370       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E1227 09:35:55.983725       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E1227 09:35:56.072041       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1693" type="*v1.ConfigMap"
	E1227 09:35:56.075925       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E1227 09:35:56.098475       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E1227 09:35:56.164200       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E1227 09:35:56.348042       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E1227 09:35:56.412507       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	I1227 09:35:58.747021       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 27 09:36:03 no-preload-963457 kubelet[2187]: I1227 09:36:03.262587    2187 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6f0a5a48-9159-478a-8949-827103a7c85c-lib-modules\") pod \"kube-proxy-grkqs\" (UID: \"6f0a5a48-9159-478a-8949-827103a7c85c\") " pod="kube-system/kube-proxy-grkqs"
	Dec 27 09:36:03 no-preload-963457 kubelet[2187]: I1227 09:36:03.262627    2187 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/36ed3b47-67e2-483b-9563-f3366df2f0c5-xtables-lock\") pod \"kindnet-7kw8b\" (UID: \"36ed3b47-67e2-483b-9563-f3366df2f0c5\") " pod="kube-system/kindnet-7kw8b"
	Dec 27 09:36:03 no-preload-963457 kubelet[2187]: I1227 09:36:03.262703    2187 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6f0a5a48-9159-478a-8949-827103a7c85c-xtables-lock\") pod \"kube-proxy-grkqs\" (UID: \"6f0a5a48-9159-478a-8949-827103a7c85c\") " pod="kube-system/kube-proxy-grkqs"
	Dec 27 09:36:03 no-preload-963457 kubelet[2187]: I1227 09:36:03.262738    2187 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/36ed3b47-67e2-483b-9563-f3366df2f0c5-cni-cfg\") pod \"kindnet-7kw8b\" (UID: \"36ed3b47-67e2-483b-9563-f3366df2f0c5\") " pod="kube-system/kindnet-7kw8b"
	Dec 27 09:36:03 no-preload-963457 kubelet[2187]: I1227 09:36:03.262762    2187 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrh28\" (UniqueName: \"kubernetes.io/projected/36ed3b47-67e2-483b-9563-f3366df2f0c5-kube-api-access-jrh28\") pod \"kindnet-7kw8b\" (UID: \"36ed3b47-67e2-483b-9563-f3366df2f0c5\") " pod="kube-system/kindnet-7kw8b"
	Dec 27 09:36:04 no-preload-963457 kubelet[2187]: E1227 09:36:04.666332    2187 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-no-preload-963457" containerName="kube-controller-manager"
	Dec 27 09:36:04 no-preload-963457 kubelet[2187]: I1227 09:36:04.677058    2187 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-grkqs" podStartSLOduration=1.677041029 podStartE2EDuration="1.677041029s" podCreationTimestamp="2025-12-27 09:36:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 09:36:03.701639768 +0000 UTC m=+6.164622719" watchObservedRunningTime="2025-12-27 09:36:04.677041029 +0000 UTC m=+7.140023980"
	Dec 27 09:36:05 no-preload-963457 kubelet[2187]: E1227 09:36:05.055465    2187 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-no-preload-963457" containerName="kube-scheduler"
	Dec 27 09:36:06 no-preload-963457 kubelet[2187]: I1227 09:36:06.705142    2187 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kindnet-7kw8b" podStartSLOduration=1.151836167 podStartE2EDuration="3.705112279s" podCreationTimestamp="2025-12-27 09:36:03 +0000 UTC" firstStartedPulling="2025-12-27 09:36:03.490492684 +0000 UTC m=+5.953475616" lastFinishedPulling="2025-12-27 09:36:06.043768782 +0000 UTC m=+8.506751728" observedRunningTime="2025-12-27 09:36:06.704319865 +0000 UTC m=+9.167302828" watchObservedRunningTime="2025-12-27 09:36:06.705112279 +0000 UTC m=+9.168095227"
	Dec 27 09:36:10 no-preload-963457 kubelet[2187]: E1227 09:36:10.471281    2187 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-no-preload-963457" containerName="kube-apiserver"
	Dec 27 09:36:13 no-preload-963457 kubelet[2187]: E1227 09:36:13.004964    2187 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-no-preload-963457" containerName="etcd"
	Dec 27 09:36:14 no-preload-963457 kubelet[2187]: E1227 09:36:14.670608    2187 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-no-preload-963457" containerName="kube-controller-manager"
	Dec 27 09:36:15 no-preload-963457 kubelet[2187]: E1227 09:36:15.058951    2187 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-no-preload-963457" containerName="kube-scheduler"
	Dec 27 09:36:16 no-preload-963457 kubelet[2187]: I1227 09:36:16.991551    2187 kubelet_node_status.go:427] "Fast updating node status as it just became ready"
	Dec 27 09:36:17 no-preload-963457 kubelet[2187]: I1227 09:36:17.061851    2187 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/3de40d14-cc88-44f8-a071-caea798ff465-tmp\") pod \"storage-provisioner\" (UID: \"3de40d14-cc88-44f8-a071-caea798ff465\") " pod="kube-system/storage-provisioner"
	Dec 27 09:36:17 no-preload-963457 kubelet[2187]: I1227 09:36:17.061897    2187 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nz5b7\" (UniqueName: \"kubernetes.io/projected/3de40d14-cc88-44f8-a071-caea798ff465-kube-api-access-nz5b7\") pod \"storage-provisioner\" (UID: \"3de40d14-cc88-44f8-a071-caea798ff465\") " pod="kube-system/storage-provisioner"
	Dec 27 09:36:17 no-preload-963457 kubelet[2187]: I1227 09:36:17.061929    2187 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2152780b-b980-4e4a-b652-9cd0ec857a2a-config-volume\") pod \"coredns-7d764666f9-wnzhx\" (UID: \"2152780b-b980-4e4a-b652-9cd0ec857a2a\") " pod="kube-system/coredns-7d764666f9-wnzhx"
	Dec 27 09:36:17 no-preload-963457 kubelet[2187]: I1227 09:36:17.062023    2187 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2xxx\" (UniqueName: \"kubernetes.io/projected/2152780b-b980-4e4a-b652-9cd0ec857a2a-kube-api-access-b2xxx\") pod \"coredns-7d764666f9-wnzhx\" (UID: \"2152780b-b980-4e4a-b652-9cd0ec857a2a\") " pod="kube-system/coredns-7d764666f9-wnzhx"
	Dec 27 09:36:17 no-preload-963457 kubelet[2187]: E1227 09:36:17.717529    2187 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-wnzhx" containerName="coredns"
	Dec 27 09:36:17 no-preload-963457 kubelet[2187]: I1227 09:36:17.725906    2187 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=14.725891881999999 podStartE2EDuration="14.725891882s" podCreationTimestamp="2025-12-27 09:36:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 09:36:17.725679502 +0000 UTC m=+20.188662453" watchObservedRunningTime="2025-12-27 09:36:17.725891882 +0000 UTC m=+20.188874831"
	Dec 27 09:36:17 no-preload-963457 kubelet[2187]: I1227 09:36:17.735311    2187 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-wnzhx" podStartSLOduration=14.73529652 podStartE2EDuration="14.73529652s" podCreationTimestamp="2025-12-27 09:36:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 09:36:17.735075187 +0000 UTC m=+20.198058173" watchObservedRunningTime="2025-12-27 09:36:17.73529652 +0000 UTC m=+20.198279471"
	Dec 27 09:36:18 no-preload-963457 kubelet[2187]: E1227 09:36:18.719605    2187 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-wnzhx" containerName="coredns"
	Dec 27 09:36:19 no-preload-963457 kubelet[2187]: E1227 09:36:19.722419    2187 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-wnzhx" containerName="coredns"
	Dec 27 09:36:19 no-preload-963457 kubelet[2187]: I1227 09:36:19.776764    2187 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rn5xf\" (UniqueName: \"kubernetes.io/projected/ad53255a-3a97-45fe-bf01-72e0602f22fa-kube-api-access-rn5xf\") pod \"busybox\" (UID: \"ad53255a-3a97-45fe-bf01-72e0602f22fa\") " pod="default/busybox"
	Dec 27 09:36:22 no-preload-963457 kubelet[2187]: I1227 09:36:22.744969    2187 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.783801296 podStartE2EDuration="3.744949971s" podCreationTimestamp="2025-12-27 09:36:19 +0000 UTC" firstStartedPulling="2025-12-27 09:36:20.054898187 +0000 UTC m=+22.517881130" lastFinishedPulling="2025-12-27 09:36:22.016046857 +0000 UTC m=+24.479029805" observedRunningTime="2025-12-27 09:36:22.744365075 +0000 UTC m=+25.207348023" watchObservedRunningTime="2025-12-27 09:36:22.744949971 +0000 UTC m=+25.207932922"
	
	
	==> storage-provisioner [3f2c818829991201dd86783c83dcfe1c4e545172470da25c067ef60aaa627e50] <==
	I1227 09:36:17.371985       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1227 09:36:17.379686       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1227 09:36:17.379727       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1227 09:36:17.381877       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 09:36:17.385595       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1227 09:36:17.385765       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1227 09:36:17.386002       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-963457_101d8fe2-b6ba-450b-8c68-488eb0e7c4eb!
	I1227 09:36:17.386263       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"fe1a3181-a427-43a3-94cb-fd67a4c65111", APIVersion:"v1", ResourceVersion:"455", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-963457_101d8fe2-b6ba-450b-8c68-488eb0e7c4eb became leader
	W1227 09:36:17.387599       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 09:36:17.391630       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1227 09:36:17.486995       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-963457_101d8fe2-b6ba-450b-8c68-488eb0e7c4eb!
	W1227 09:36:19.395634       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 09:36:19.399564       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 09:36:21.402308       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 09:36:21.406063       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 09:36:23.409421       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 09:36:23.413521       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 09:36:25.418382       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 09:36:25.423413       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 09:36:27.427421       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 09:36:27.432212       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 09:36:29.436232       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 09:36:29.441161       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-963457 -n no-preload-963457
helpers_test.go:270: (dbg) Run:  kubectl --context no-preload-963457 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.35s)
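Note on the kube-scheduler log above: the repeated "Failed to watch ... is forbidden" errors are startup-ordering noise rather than the failure cause. The scheduler's informers come up before the apiserver has finished reconciling the default RBAC roles, and they recover once caches sync at 09:35:58. If RBAC is ever in doubt, the grants can be spot-checked by hand (a sketch, not part of the harness):
	kubectl --context no-preload-963457 auth can-i list pods --as=system:kube-scheduler
	kubectl --context no-preload-963457 auth can-i watch nodes --as=system:kube-scheduler
	# both should print "yes" once the system:kube-scheduler bindings are reconciled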

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-497722 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-497722 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (235.396675ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T09:36:36Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-497722 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
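The MK_ADDON_ENABLE_PAUSED exit above comes from minikube's pre-flight check of whether the cluster is paused, which shells into the node and invokes runc directly; on this crio node runc's default state directory /run/runc does not exist, so the probe itself errors out before the addon is ever touched. The same probe can be replayed by hand (a sketch, using the node container named in the inspect output below):
	# the exact command from the stderr above, run inside the node container
	docker exec default-k8s-diff-port-497722 sudo runc list -f json
	# fails with "open /run/runc: no such file or directory", matching the log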
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-497722 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-497722 describe deploy/metrics-server -n kube-system: exit status 1 (56.863912ms)

** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-497722 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
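For reference, the assertion that just failed amounts to reading the container image off the metrics-server Deployment; an equivalent manual spot check (a sketch) would be:
	kubectl --context default-k8s-diff-port-497722 -n kube-system \
	  get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'
	# expected to print: fake.domain/registry.k8s.io/echoserver:1.4
	# here the deployment was never created (see the NotFound above), so there is nothing to read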
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect default-k8s-diff-port-497722
helpers_test.go:244: (dbg) docker inspect default-k8s-diff-port-497722:

-- stdout --
	[
	    {
	        "Id": "69d33a148b7ca9ade22e9948d8b57467a62cb59e5e045051bc08113923c77288",
	        "Created": "2025-12-27T09:35:53.140774946Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 617669,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-27T09:35:53.17715703Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c444b5e11df9d6bc496256b0e11f4e11deb33a885211ded2c3a4667df55bcb6b",
	        "ResolvConfPath": "/var/lib/docker/containers/69d33a148b7ca9ade22e9948d8b57467a62cb59e5e045051bc08113923c77288/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/69d33a148b7ca9ade22e9948d8b57467a62cb59e5e045051bc08113923c77288/hostname",
	        "HostsPath": "/var/lib/docker/containers/69d33a148b7ca9ade22e9948d8b57467a62cb59e5e045051bc08113923c77288/hosts",
	        "LogPath": "/var/lib/docker/containers/69d33a148b7ca9ade22e9948d8b57467a62cb59e5e045051bc08113923c77288/69d33a148b7ca9ade22e9948d8b57467a62cb59e5e045051bc08113923c77288-json.log",
	        "Name": "/default-k8s-diff-port-497722",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-497722:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-497722",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "69d33a148b7ca9ade22e9948d8b57467a62cb59e5e045051bc08113923c77288",
	                "LowerDir": "/var/lib/docker/overlay2/9b457a702b8ced050755c1f6a8fdb882cc79faef36a30cb52139dd971b84f61a-init/diff:/var/lib/docker/overlay2/8c7e7d4b0a6e752d3b033998593f18412fe880cdae3fca36ce4655d5d5dd6a34/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9b457a702b8ced050755c1f6a8fdb882cc79faef36a30cb52139dd971b84f61a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9b457a702b8ced050755c1f6a8fdb882cc79faef36a30cb52139dd971b84f61a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9b457a702b8ced050755c1f6a8fdb882cc79faef36a30cb52139dd971b84f61a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-497722",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-497722/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-497722",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-497722",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-497722",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "3efa7dbd9810749802b5f22b9d4ccbfa115e1fb5b493385e106f2e36179eaf28",
	            "SandboxKey": "/var/run/docker/netns/3efa7dbd9810",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33448"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33449"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33452"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33450"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33451"
	                    }
	                ]
	            },
	            "Networks": {
	                "default-k8s-diff-port-497722": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "3e6df945c4cd8ddce46059c83cc7bed3e6a73494a8b947e27d2c18ad8eacf919",
	                    "EndpointID": "d03e098fd4fcb8cee153608e2559cd648c4a16b5bbd8329f9e79a42129bdc63b",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "4e:1b:fa:97:46:f9",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-497722",
	                        "69d33a148b7c"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
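The forwarded host ports buried in the inspect dump can be pulled out directly with a Go template instead of scanning the JSON (a sketch; 33448 is the 22/tcp mapping shown under NetworkSettings.Ports above):
	docker inspect default-k8s-diff-port-497722 \
	  --format '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'
	# prints 33448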
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-497722 -n default-k8s-diff-port-497722
helpers_test.go:253: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-497722 logs -n 25
helpers_test.go:261: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p running-upgrade-561421                                                                                                                                                                                                                     │ running-upgrade-561421       │ jenkins │ v1.37.0 │ 27 Dec 25 09:34 UTC │ 27 Dec 25 09:34 UTC │
	│ start   │ -p old-k8s-version-094398 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-094398       │ jenkins │ v1.37.0 │ 27 Dec 25 09:34 UTC │ 27 Dec 25 09:35 UTC │
	│ image   │ test-preload-805186 image pull ghcr.io/medyagh/image-mirrors/busybox:latest                                                                                                                                                                   │ test-preload-805186          │ jenkins │ v1.37.0 │ 27 Dec 25 09:34 UTC │ 27 Dec 25 09:34 UTC │
	│ stop    │ -p test-preload-805186                                                                                                                                                                                                                        │ test-preload-805186          │ jenkins │ v1.37.0 │ 27 Dec 25 09:34 UTC │ 27 Dec 25 09:34 UTC │
	│ start   │ -p test-preload-805186 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio                                                                                                                            │ test-preload-805186          │ jenkins │ v1.37.0 │ 27 Dec 25 09:34 UTC │ 27 Dec 25 09:35 UTC │
	│ start   │ -p cert-expiration-237269 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-237269       │ jenkins │ v1.37.0 │ 27 Dec 25 09:35 UTC │ 27 Dec 25 09:35 UTC │
	│ delete  │ -p cert-expiration-237269                                                                                                                                                                                                                     │ cert-expiration-237269       │ jenkins │ v1.37.0 │ 27 Dec 25 09:35 UTC │                     │
	│ start   │ -p embed-certs-912564 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                        │ embed-certs-912564           │ jenkins │ v1.37.0 │ 27 Dec 25 09:35 UTC │ 27 Dec 25 09:35 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-094398 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-094398       │ jenkins │ v1.37.0 │ 27 Dec 25 09:35 UTC │                     │
	│ stop    │ -p old-k8s-version-094398 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-094398       │ jenkins │ v1.37.0 │ 27 Dec 25 09:35 UTC │ 27 Dec 25 09:35 UTC │
	│ image   │ test-preload-805186 image list                                                                                                                                                                                                                │ test-preload-805186          │ jenkins │ v1.37.0 │ 27 Dec 25 09:35 UTC │ 27 Dec 25 09:35 UTC │
	│ delete  │ -p test-preload-805186                                                                                                                                                                                                                        │ test-preload-805186          │ jenkins │ v1.37.0 │ 27 Dec 25 09:35 UTC │ 27 Dec 25 09:35 UTC │
	│ delete  │ -p disable-driver-mounts-917808                                                                                                                                                                                                               │ disable-driver-mounts-917808 │ jenkins │ v1.37.0 │ 27 Dec 25 09:35 UTC │ 27 Dec 25 09:35 UTC │
	│ start   │ -p no-preload-963457 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                       │ no-preload-963457            │ jenkins │ v1.37.0 │ 27 Dec 25 09:35 UTC │ 27 Dec 25 09:36 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-094398 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-094398       │ jenkins │ v1.37.0 │ 27 Dec 25 09:35 UTC │ 27 Dec 25 09:35 UTC │
	│ start   │ -p old-k8s-version-094398 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-094398       │ jenkins │ v1.37.0 │ 27 Dec 25 09:35 UTC │ 27 Dec 25 09:36 UTC │
	│ delete  │ -p stopped-upgrade-196124                                                                                                                                                                                                                     │ stopped-upgrade-196124       │ jenkins │ v1.37.0 │ 27 Dec 25 09:35 UTC │ 27 Dec 25 09:35 UTC │
	│ start   │ -p default-k8s-diff-port-497722 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-497722 │ jenkins │ v1.37.0 │ 27 Dec 25 09:35 UTC │ 27 Dec 25 09:36 UTC │
	│ addons  │ enable metrics-server -p embed-certs-912564 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-912564           │ jenkins │ v1.37.0 │ 27 Dec 25 09:35 UTC │                     │
	│ stop    │ -p embed-certs-912564 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-912564           │ jenkins │ v1.37.0 │ 27 Dec 25 09:35 UTC │ 27 Dec 25 09:36 UTC │
	│ addons  │ enable dashboard -p embed-certs-912564 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-912564           │ jenkins │ v1.37.0 │ 27 Dec 25 09:36 UTC │ 27 Dec 25 09:36 UTC │
	│ start   │ -p embed-certs-912564 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                        │ embed-certs-912564           │ jenkins │ v1.37.0 │ 27 Dec 25 09:36 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-963457 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-963457            │ jenkins │ v1.37.0 │ 27 Dec 25 09:36 UTC │                     │
	│ stop    │ -p no-preload-963457 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-963457            │ jenkins │ v1.37.0 │ 27 Dec 25 09:36 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-497722 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-497722 │ jenkins │ v1.37.0 │ 27 Dec 25 09:36 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/27 09:36:15
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1227 09:36:15.755856  622335 out.go:360] Setting OutFile to fd 1 ...
	I1227 09:36:15.755997  622335 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:36:15.756005  622335 out.go:374] Setting ErrFile to fd 2...
	I1227 09:36:15.756012  622335 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:36:15.756228  622335 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22343-373581/.minikube/bin
	I1227 09:36:15.756685  622335 out.go:368] Setting JSON to false
	I1227 09:36:15.758150  622335 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4720,"bootTime":1766823456,"procs":363,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1227 09:36:15.758213  622335 start.go:143] virtualization: kvm guest
	I1227 09:36:15.759939  622335 out.go:179] * [embed-certs-912564] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1227 09:36:15.761016  622335 out.go:179]   - MINIKUBE_LOCATION=22343
	I1227 09:36:15.761014  622335 notify.go:221] Checking for updates...
	I1227 09:36:15.763382  622335 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 09:36:15.764638  622335 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22343-373581/kubeconfig
	I1227 09:36:15.765807  622335 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22343-373581/.minikube
	I1227 09:36:15.766905  622335 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1227 09:36:15.767909  622335 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1227 09:36:15.769291  622335 config.go:182] Loaded profile config "embed-certs-912564": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:36:15.769895  622335 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 09:36:15.793686  622335 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1227 09:36:15.793853  622335 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 09:36:15.849675  622335 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-27 09:36:15.839729427 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1227 09:36:15.849769  622335 docker.go:319] overlay module found
	I1227 09:36:15.851438  622335 out.go:179] * Using the docker driver based on existing profile
	I1227 09:36:15.852555  622335 start.go:309] selected driver: docker
	I1227 09:36:15.852572  622335 start.go:928] validating driver "docker" against &{Name:embed-certs-912564 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:embed-certs-912564 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 09:36:15.852663  622335 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 09:36:15.853278  622335 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 09:36:15.905518  622335 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-27 09:36:15.896501582 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1227 09:36:15.905807  622335 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 09:36:15.905858  622335 cni.go:84] Creating CNI manager for ""
	I1227 09:36:15.905926  622335 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 09:36:15.905973  622335 start.go:353] cluster config:
	{Name:embed-certs-912564 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:embed-certs-912564 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 09:36:15.907451  622335 out.go:179] * Starting "embed-certs-912564" primary control-plane node in "embed-certs-912564" cluster
	I1227 09:36:15.908326  622335 cache.go:134] Beginning downloading kic base image for docker with crio
	I1227 09:36:15.909241  622335 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
	I1227 09:36:15.910102  622335 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 09:36:15.910131  622335 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22343-373581/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4
	I1227 09:36:15.910156  622335 cache.go:65] Caching tarball of preloaded images
	I1227 09:36:15.910205  622335 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1227 09:36:15.910262  622335 preload.go:251] Found /home/jenkins/minikube-integration/22343-373581/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1227 09:36:15.910273  622335 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I1227 09:36:15.910379  622335 profile.go:143] Saving config to /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/embed-certs-912564/config.json ...
	I1227 09:36:15.929803  622335 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon, skipping pull
	I1227 09:36:15.929822  622335 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in daemon, skipping load
	I1227 09:36:15.929849  622335 cache.go:243] Successfully downloaded all kic artifacts
	I1227 09:36:15.929884  622335 start.go:360] acquireMachinesLock for embed-certs-912564: {Name:mk61b0f1dd44336f66b7ae60f44b102943279f72 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 09:36:15.929937  622335 start.go:364] duration metric: took 35.4µs to acquireMachinesLock for "embed-certs-912564"
	I1227 09:36:15.929953  622335 start.go:96] Skipping create...Using existing machine configuration
	I1227 09:36:15.929958  622335 fix.go:54] fixHost starting: 
	I1227 09:36:15.930186  622335 cli_runner.go:164] Run: docker container inspect embed-certs-912564 --format={{.State.Status}}
	I1227 09:36:15.946167  622335 fix.go:112] recreateIfNeeded on embed-certs-912564: state=Stopped err=<nil>
	W1227 09:36:15.946200  622335 fix.go:138] unexpected machine state, will restart: <nil>
	W1227 09:36:14.163271  616179 node_ready.go:57] node "default-k8s-diff-port-497722" has "Ready":"False" status (will retry)
	W1227 09:36:16.163948  616179 node_ready.go:57] node "default-k8s-diff-port-497722" has "Ready":"False" status (will retry)
	W1227 09:36:13.261954  613189 pod_ready.go:104] pod "coredns-5dd5756b68-l2f7v" is not "Ready", error: <nil>
	W1227 09:36:15.262540  613189 pod_ready.go:104] pod "coredns-5dd5756b68-l2f7v" is not "Ready", error: <nil>
	W1227 09:36:17.761765  613189 pod_ready.go:104] pod "coredns-5dd5756b68-l2f7v" is not "Ready", error: <nil>
	I1227 09:36:17.207000  610436 node_ready.go:49] node "no-preload-963457" is "Ready"
	I1227 09:36:17.207025  610436 node_ready.go:38] duration metric: took 13.502511991s for node "no-preload-963457" to be "Ready" ...
	I1227 09:36:17.207039  610436 api_server.go:52] waiting for apiserver process to appear ...
	I1227 09:36:17.207085  610436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 09:36:17.219077  610436 api_server.go:72] duration metric: took 13.880363312s to wait for apiserver process to appear ...
	I1227 09:36:17.219099  610436 api_server.go:88] waiting for apiserver healthz status ...
	I1227 09:36:17.219117  610436 api_server.go:299] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1227 09:36:17.224033  610436 api_server.go:325] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1227 09:36:17.225019  610436 api_server.go:141] control plane version: v1.35.0
	I1227 09:36:17.225043  610436 api_server.go:131] duration metric: took 5.936968ms to wait for apiserver health ...
	I1227 09:36:17.225053  610436 system_pods.go:43] waiting for kube-system pods to appear ...
	I1227 09:36:17.227917  610436 system_pods.go:59] 8 kube-system pods found
	I1227 09:36:17.227951  610436 system_pods.go:61] "coredns-7d764666f9-wnzhx" [2152780b-b980-4e4a-b652-9cd0ec857a2a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 09:36:17.227958  610436 system_pods.go:61] "etcd-no-preload-963457" [ecf5237d-e6e0-4cee-977a-e2356ec1db8f] Running
	I1227 09:36:17.227966  610436 system_pods.go:61] "kindnet-7kw8b" [36ed3b47-67e2-483b-9563-f3366df2f0c5] Running
	I1227 09:36:17.227970  610436 system_pods.go:61] "kube-apiserver-no-preload-963457" [95e8b881-2ba5-464a-951b-8cfe1d65f33b] Running
	I1227 09:36:17.227978  610436 system_pods.go:61] "kube-controller-manager-no-preload-963457" [2f715235-2f6e-414e-911b-d6ee55e67a06] Running
	I1227 09:36:17.227987  610436 system_pods.go:61] "kube-proxy-grkqs" [6f0a5a48-9159-478a-8949-827103a7c85c] Running
	I1227 09:36:17.227997  610436 system_pods.go:61] "kube-scheduler-no-preload-963457" [f5b9da44-e7d5-4901-b9c6-2315b873d53a] Running
	I1227 09:36:17.228004  610436 system_pods.go:61] "storage-provisioner" [3de40d14-cc88-44f8-a071-caea798ff465] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1227 09:36:17.228015  610436 system_pods.go:74] duration metric: took 2.954672ms to wait for pod list to return data ...
	I1227 09:36:17.228026  610436 default_sa.go:34] waiting for default service account to be created ...
	I1227 09:36:17.230166  610436 default_sa.go:45] found service account: "default"
	I1227 09:36:17.230187  610436 default_sa.go:55] duration metric: took 2.152948ms for default service account to be created ...
	I1227 09:36:17.230195  610436 system_pods.go:116] waiting for k8s-apps to be running ...
	I1227 09:36:17.232590  610436 system_pods.go:86] 8 kube-system pods found
	I1227 09:36:17.232614  610436 system_pods.go:89] "coredns-7d764666f9-wnzhx" [2152780b-b980-4e4a-b652-9cd0ec857a2a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 09:36:17.232621  610436 system_pods.go:89] "etcd-no-preload-963457" [ecf5237d-e6e0-4cee-977a-e2356ec1db8f] Running
	I1227 09:36:17.232626  610436 system_pods.go:89] "kindnet-7kw8b" [36ed3b47-67e2-483b-9563-f3366df2f0c5] Running
	I1227 09:36:17.232629  610436 system_pods.go:89] "kube-apiserver-no-preload-963457" [95e8b881-2ba5-464a-951b-8cfe1d65f33b] Running
	I1227 09:36:17.232633  610436 system_pods.go:89] "kube-controller-manager-no-preload-963457" [2f715235-2f6e-414e-911b-d6ee55e67a06] Running
	I1227 09:36:17.232636  610436 system_pods.go:89] "kube-proxy-grkqs" [6f0a5a48-9159-478a-8949-827103a7c85c] Running
	I1227 09:36:17.232639  610436 system_pods.go:89] "kube-scheduler-no-preload-963457" [f5b9da44-e7d5-4901-b9c6-2315b873d53a] Running
	I1227 09:36:17.232647  610436 system_pods.go:89] "storage-provisioner" [3de40d14-cc88-44f8-a071-caea798ff465] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1227 09:36:17.232678  610436 retry.go:84] will retry after 300ms: missing components: kube-dns
	I1227 09:36:17.541732  610436 system_pods.go:86] 8 kube-system pods found
	I1227 09:36:17.541764  610436 system_pods.go:89] "coredns-7d764666f9-wnzhx" [2152780b-b980-4e4a-b652-9cd0ec857a2a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 09:36:17.541770  610436 system_pods.go:89] "etcd-no-preload-963457" [ecf5237d-e6e0-4cee-977a-e2356ec1db8f] Running
	I1227 09:36:17.541776  610436 system_pods.go:89] "kindnet-7kw8b" [36ed3b47-67e2-483b-9563-f3366df2f0c5] Running
	I1227 09:36:17.541780  610436 system_pods.go:89] "kube-apiserver-no-preload-963457" [95e8b881-2ba5-464a-951b-8cfe1d65f33b] Running
	I1227 09:36:17.541785  610436 system_pods.go:89] "kube-controller-manager-no-preload-963457" [2f715235-2f6e-414e-911b-d6ee55e67a06] Running
	I1227 09:36:17.541815  610436 system_pods.go:89] "kube-proxy-grkqs" [6f0a5a48-9159-478a-8949-827103a7c85c] Running
	I1227 09:36:17.541822  610436 system_pods.go:89] "kube-scheduler-no-preload-963457" [f5b9da44-e7d5-4901-b9c6-2315b873d53a] Running
	I1227 09:36:17.541831  610436 system_pods.go:89] "storage-provisioner" [3de40d14-cc88-44f8-a071-caea798ff465] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1227 09:36:17.912221  610436 system_pods.go:86] 8 kube-system pods found
	I1227 09:36:17.912247  610436 system_pods.go:89] "coredns-7d764666f9-wnzhx" [2152780b-b980-4e4a-b652-9cd0ec857a2a] Running
	I1227 09:36:17.912252  610436 system_pods.go:89] "etcd-no-preload-963457" [ecf5237d-e6e0-4cee-977a-e2356ec1db8f] Running
	I1227 09:36:17.912255  610436 system_pods.go:89] "kindnet-7kw8b" [36ed3b47-67e2-483b-9563-f3366df2f0c5] Running
	I1227 09:36:17.912259  610436 system_pods.go:89] "kube-apiserver-no-preload-963457" [95e8b881-2ba5-464a-951b-8cfe1d65f33b] Running
	I1227 09:36:17.912262  610436 system_pods.go:89] "kube-controller-manager-no-preload-963457" [2f715235-2f6e-414e-911b-d6ee55e67a06] Running
	I1227 09:36:17.912265  610436 system_pods.go:89] "kube-proxy-grkqs" [6f0a5a48-9159-478a-8949-827103a7c85c] Running
	I1227 09:36:17.912269  610436 system_pods.go:89] "kube-scheduler-no-preload-963457" [f5b9da44-e7d5-4901-b9c6-2315b873d53a] Running
	I1227 09:36:17.912272  610436 system_pods.go:89] "storage-provisioner" [3de40d14-cc88-44f8-a071-caea798ff465] Running
	I1227 09:36:17.912279  610436 system_pods.go:126] duration metric: took 682.077772ms to wait for k8s-apps to be running ...
	I1227 09:36:17.912286  610436 system_svc.go:44] waiting for kubelet service to be running ....
	I1227 09:36:17.912328  610436 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 09:36:17.925757  610436 system_svc.go:56] duration metric: took 13.459067ms WaitForService to wait for kubelet
	I1227 09:36:17.925808  610436 kubeadm.go:587] duration metric: took 14.587094691s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 09:36:17.925832  610436 node_conditions.go:102] verifying NodePressure condition ...
	I1227 09:36:17.928354  610436 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1227 09:36:17.928377  610436 node_conditions.go:123] node cpu capacity is 8
	I1227 09:36:17.928390  610436 node_conditions.go:105] duration metric: took 2.552518ms to run NodePressure ...
	I1227 09:36:17.928402  610436 start.go:242] waiting for startup goroutines ...
	I1227 09:36:17.928411  610436 start.go:247] waiting for cluster config update ...
	I1227 09:36:17.928428  610436 start.go:256] writing updated cluster config ...
	I1227 09:36:17.928688  610436 ssh_runner.go:195] Run: rm -f paused
	I1227 09:36:17.932505  610436 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1227 09:36:18.012128  610436 pod_ready.go:83] waiting for pod "coredns-7d764666f9-wnzhx" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:36:18.016092  610436 pod_ready.go:94] pod "coredns-7d764666f9-wnzhx" is "Ready"
	I1227 09:36:18.016113  610436 pod_ready.go:86] duration metric: took 3.954033ms for pod "coredns-7d764666f9-wnzhx" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:36:18.018039  610436 pod_ready.go:83] waiting for pod "etcd-no-preload-963457" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:36:18.021308  610436 pod_ready.go:94] pod "etcd-no-preload-963457" is "Ready"
	I1227 09:36:18.021328  610436 pod_ready.go:86] duration metric: took 3.271462ms for pod "etcd-no-preload-963457" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:36:18.022843  610436 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-963457" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:36:18.025875  610436 pod_ready.go:94] pod "kube-apiserver-no-preload-963457" is "Ready"
	I1227 09:36:18.025892  610436 pod_ready.go:86] duration metric: took 3.027767ms for pod "kube-apiserver-no-preload-963457" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:36:18.027544  610436 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-963457" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:36:18.336856  610436 pod_ready.go:94] pod "kube-controller-manager-no-preload-963457" is "Ready"
	I1227 09:36:18.336887  610436 pod_ready.go:86] duration metric: took 309.32474ms for pod "kube-controller-manager-no-preload-963457" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:36:18.537212  610436 pod_ready.go:83] waiting for pod "kube-proxy-grkqs" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:36:18.936288  610436 pod_ready.go:94] pod "kube-proxy-grkqs" is "Ready"
	I1227 09:36:18.936315  610436 pod_ready.go:86] duration metric: took 399.078348ms for pod "kube-proxy-grkqs" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:36:19.137254  610436 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-963457" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:36:19.536512  610436 pod_ready.go:94] pod "kube-scheduler-no-preload-963457" is "Ready"
	I1227 09:36:19.536545  610436 pod_ready.go:86] duration metric: took 399.259363ms for pod "kube-scheduler-no-preload-963457" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:36:19.536571  610436 pod_ready.go:40] duration metric: took 1.604026487s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1227 09:36:19.582579  610436 start.go:625] kubectl: 1.35.0, cluster: 1.35.0 (minor skew: 0)
	I1227 09:36:19.584481  610436 out.go:179] * Done! kubectl is now configured to use "no-preload-963457" cluster and "default" namespace by default
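	For reference, the pod/node readiness polling logged above can be approximated from the shell. A minimal sketch (assuming the kubeconfig context is named after the profile, as the "Done!" line suggests), not minikube's actual implementation:
	# Hedged sketch: the same readiness gates, expressed with kubectl wait.
	kubectl --context no-preload-963457 wait --for=condition=Ready \
	  node/no-preload-963457 --timeout=4m
	kubectl --context no-preload-963457 -n kube-system wait \
	  --for=condition=Ready pod -l k8s-app=kube-dns --timeout=4m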
	I1227 09:36:15.947788  622335 out.go:252] * Restarting existing docker container for "embed-certs-912564" ...
	I1227 09:36:15.947868  622335 cli_runner.go:164] Run: docker start embed-certs-912564
	I1227 09:36:16.186477  622335 cli_runner.go:164] Run: docker container inspect embed-certs-912564 --format={{.State.Status}}
	I1227 09:36:16.204808  622335 kic.go:430] container "embed-certs-912564" state is running.
	I1227 09:36:16.205231  622335 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-912564
	I1227 09:36:16.224487  622335 profile.go:143] Saving config to /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/embed-certs-912564/config.json ...
	I1227 09:36:16.224742  622335 machine.go:94] provisionDockerMachine start ...
	I1227 09:36:16.224849  622335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-912564
	I1227 09:36:16.243201  622335 main.go:144] libmachine: Using SSH client type: native
	I1227 09:36:16.243427  622335 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33453 <nil> <nil>}
	I1227 09:36:16.243440  622335 main.go:144] libmachine: About to run SSH command:
	hostname
	I1227 09:36:16.244129  622335 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:57058->127.0.0.1:33453: read: connection reset by peer
	I1227 09:36:19.367696  622335 main.go:144] libmachine: SSH cmd err, output: <nil>: embed-certs-912564
	
	I1227 09:36:19.367723  622335 ubuntu.go:182] provisioning hostname "embed-certs-912564"
	I1227 09:36:19.367814  622335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-912564
	I1227 09:36:19.386757  622335 main.go:144] libmachine: Using SSH client type: native
	I1227 09:36:19.387127  622335 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33453 <nil> <nil>}
	I1227 09:36:19.387150  622335 main.go:144] libmachine: About to run SSH command:
	sudo hostname embed-certs-912564 && echo "embed-certs-912564" | sudo tee /etc/hostname
	I1227 09:36:19.522771  622335 main.go:144] libmachine: SSH cmd err, output: <nil>: embed-certs-912564
	
	I1227 09:36:19.522877  622335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-912564
	I1227 09:36:19.543038  622335 main.go:144] libmachine: Using SSH client type: native
	I1227 09:36:19.543358  622335 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33453 <nil> <nil>}
	I1227 09:36:19.543388  622335 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-912564' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-912564/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-912564' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1227 09:36:19.668353  622335 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1227 09:36:19.668380  622335 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22343-373581/.minikube CaCertPath:/home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22343-373581/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22343-373581/.minikube}
	I1227 09:36:19.668427  622335 ubuntu.go:190] setting up certificates
	I1227 09:36:19.668447  622335 provision.go:84] configureAuth start
	I1227 09:36:19.668529  622335 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-912564
	I1227 09:36:19.689166  622335 provision.go:143] copyHostCerts
	I1227 09:36:19.689233  622335 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-373581/.minikube/key.pem, removing ...
	I1227 09:36:19.689256  622335 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-373581/.minikube/key.pem
	I1227 09:36:19.689339  622335 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22343-373581/.minikube/key.pem (1679 bytes)
	I1227 09:36:19.689483  622335 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-373581/.minikube/ca.pem, removing ...
	I1227 09:36:19.689499  622335 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-373581/.minikube/ca.pem
	I1227 09:36:19.689545  622335 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22343-373581/.minikube/ca.pem (1082 bytes)
	I1227 09:36:19.689664  622335 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-373581/.minikube/cert.pem, removing ...
	I1227 09:36:19.689673  622335 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-373581/.minikube/cert.pem
	I1227 09:36:19.689711  622335 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22343-373581/.minikube/cert.pem (1123 bytes)
	I1227 09:36:19.689881  622335 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22343-373581/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca-key.pem org=jenkins.embed-certs-912564 san=[127.0.0.1 192.168.94.2 embed-certs-912564 localhost minikube]
	I1227 09:36:19.746663  622335 provision.go:177] copyRemoteCerts
	I1227 09:36:19.746730  622335 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1227 09:36:19.746782  622335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-912564
	I1227 09:36:19.766272  622335 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/embed-certs-912564/id_rsa Username:docker}
	I1227 09:36:19.858141  622335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1227 09:36:19.876224  622335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1227 09:36:19.894481  622335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1227 09:36:19.911686  622335 provision.go:87] duration metric: took 243.216642ms to configureAuth
	I1227 09:36:19.911711  622335 ubuntu.go:206] setting minikube options for container-runtime
	I1227 09:36:19.911915  622335 config.go:182] Loaded profile config "embed-certs-912564": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:36:19.912029  622335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-912564
	I1227 09:36:19.930663  622335 main.go:144] libmachine: Using SSH client type: native
	I1227 09:36:19.930962  622335 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33453 <nil> <nil>}
	I1227 09:36:19.930983  622335 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1227 09:36:20.251003  622335 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1227 09:36:20.251031  622335 machine.go:97] duration metric: took 4.026272116s to provisionDockerMachine
	I1227 09:36:20.251046  622335 start.go:293] postStartSetup for "embed-certs-912564" (driver="docker")
	I1227 09:36:20.251060  622335 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1227 09:36:20.251125  622335 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1227 09:36:20.251200  622335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-912564
	I1227 09:36:20.272340  622335 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/embed-certs-912564/id_rsa Username:docker}
	I1227 09:36:20.363700  622335 ssh_runner.go:195] Run: cat /etc/os-release
	I1227 09:36:20.367711  622335 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1227 09:36:20.367734  622335 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1227 09:36:20.367749  622335 filesync.go:126] Scanning /home/jenkins/minikube-integration/22343-373581/.minikube/addons for local assets ...
	I1227 09:36:20.367820  622335 filesync.go:126] Scanning /home/jenkins/minikube-integration/22343-373581/.minikube/files for local assets ...
	I1227 09:36:20.367922  622335 filesync.go:149] local asset: /home/jenkins/minikube-integration/22343-373581/.minikube/files/etc/ssl/certs/3771712.pem -> 3771712.pem in /etc/ssl/certs
	I1227 09:36:20.368051  622335 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1227 09:36:20.376361  622335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/files/etc/ssl/certs/3771712.pem --> /etc/ssl/certs/3771712.pem (1708 bytes)
	I1227 09:36:20.393895  622335 start.go:296] duration metric: took 142.830385ms for postStartSetup
	I1227 09:36:20.393981  622335 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 09:36:20.394046  622335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-912564
	I1227 09:36:20.412636  622335 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/embed-certs-912564/id_rsa Username:docker}
	I1227 09:36:20.501303  622335 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1227 09:36:20.506127  622335 fix.go:56] duration metric: took 4.576160597s for fixHost
	I1227 09:36:20.506154  622335 start.go:83] releasing machines lock for "embed-certs-912564", held for 4.576205681s
	I1227 09:36:20.506231  622335 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-912564
	I1227 09:36:20.526289  622335 ssh_runner.go:195] Run: cat /version.json
	I1227 09:36:20.526337  622335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-912564
	I1227 09:36:20.526345  622335 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1227 09:36:20.526445  622335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-912564
	I1227 09:36:20.546473  622335 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/embed-certs-912564/id_rsa Username:docker}
	I1227 09:36:20.546990  622335 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/embed-certs-912564/id_rsa Username:docker}
	I1227 09:36:20.635010  622335 ssh_runner.go:195] Run: systemctl --version
	I1227 09:36:20.692254  622335 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1227 09:36:20.729042  622335 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1227 09:36:20.734159  622335 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1227 09:36:20.734289  622335 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1227 09:36:20.742588  622335 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1227 09:36:20.742612  622335 start.go:496] detecting cgroup driver to use...
	I1227 09:36:20.742656  622335 detect.go:190] detected "systemd" cgroup driver on host os
	I1227 09:36:20.742708  622335 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1227 09:36:20.757772  622335 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 09:36:20.771033  622335 docker.go:218] disabling cri-docker service (if available) ...
	I1227 09:36:20.771095  622335 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1227 09:36:20.785978  622335 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1227 09:36:20.799169  622335 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1227 09:36:20.882315  622335 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1227 09:36:20.965183  622335 docker.go:234] disabling docker service ...
	I1227 09:36:20.965254  622335 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1227 09:36:20.980266  622335 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1227 09:36:20.992591  622335 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1227 09:36:21.074160  622335 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1227 09:36:21.160689  622335 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1227 09:36:21.174204  622335 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 09:36:21.188429  622335 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1227 09:36:21.188490  622335 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:36:21.197653  622335 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1227 09:36:21.197706  622335 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:36:21.206508  622335 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:36:21.215288  622335 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:36:21.224635  622335 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1227 09:36:21.232876  622335 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:36:21.241632  622335 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:36:21.250258  622335 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:36:21.259256  622335 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1227 09:36:21.267330  622335 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1227 09:36:21.274844  622335 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 09:36:21.357225  622335 ssh_runner.go:195] Run: sudo systemctl restart crio
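	The sed pipeline above converges /etc/crio/crio.conf.d/02-crio.conf on a handful of settings. A hedged sketch of an equivalent drop-in follows (section names per CRI-O's TOML layout; values are the ones shown in the log); this is not the file minikube actually writes:
	# Sketch only: end state equivalent to the sed edits above.
	sudo tee /etc/crio/crio.conf.d/02-crio.conf <<'EOF'
	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"
	
	[crio.runtime]
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]
	EOF
	sudo systemctl restart crio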
	I1227 09:36:21.513416  622335 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1227 09:36:21.513491  622335 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1227 09:36:21.517802  622335 start.go:574] Will wait 60s for crictl version
	I1227 09:36:21.517863  622335 ssh_runner.go:195] Run: which crictl
	I1227 09:36:21.521539  622335 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1227 09:36:21.547358  622335 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1227 09:36:21.547444  622335 ssh_runner.go:195] Run: crio --version
	I1227 09:36:21.578207  622335 ssh_runner.go:195] Run: crio --version
	I1227 09:36:21.609292  622335 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.34.3 ...
	W1227 09:36:18.663032  616179 node_ready.go:57] node "default-k8s-diff-port-497722" has "Ready":"False" status (will retry)
	W1227 09:36:20.664243  616179 node_ready.go:57] node "default-k8s-diff-port-497722" has "Ready":"False" status (will retry)
	I1227 09:36:21.610434  622335 cli_runner.go:164] Run: docker network inspect embed-certs-912564 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 09:36:21.628243  622335 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1227 09:36:21.632413  622335 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
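	The command above is minikube's idempotent hosts-entry pattern: drop any stale line for the name, append the fresh mapping to a temp file, then sudo-copy it back (a plain redirect would fail, since the unprivileged shell opens the target, not sudo). A generalized sketch; IP and NAME are illustrative placeholders:
	# Hedged sketch of the pattern; not minikube's exact code.
	IP=192.168.94.1; NAME=host.minikube.internal; TAB=$(printf '\t')
	{ grep -v "${TAB}${NAME}\$" /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts && rm -f /tmp/h.$$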
	I1227 09:36:21.642892  622335 kubeadm.go:884] updating cluster {Name:embed-certs-912564 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:embed-certs-912564 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1227 09:36:21.643006  622335 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 09:36:21.643062  622335 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 09:36:21.677448  622335 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 09:36:21.677471  622335 crio.go:433] Images already preloaded, skipping extraction
	I1227 09:36:21.677524  622335 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 09:36:21.703610  622335 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 09:36:21.703636  622335 cache_images.go:86] Images are preloaded, skipping loading
	I1227 09:36:21.703645  622335 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.35.0 crio true true} ...
	I1227 09:36:21.703772  622335 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-912564 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:embed-certs-912564 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1227 09:36:21.703895  622335 ssh_runner.go:195] Run: crio config
	I1227 09:36:21.750305  622335 cni.go:84] Creating CNI manager for ""
	I1227 09:36:21.750333  622335 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 09:36:21.750350  622335 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1227 09:36:21.750373  622335 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-912564 NodeName:embed-certs-912564 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1227 09:36:21.750509  622335 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-912564"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1227 09:36:21.750578  622335 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1227 09:36:21.759704  622335 binaries.go:51] Found k8s binaries, skipping transfer
	I1227 09:36:21.759777  622335 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1227 09:36:21.768072  622335 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1227 09:36:21.781002  622335 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1227 09:36:21.793925  622335 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
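	The rendered kubeadm config shown earlier is what lands in /var/tmp/minikube/kubeadm.yaml.new. To sanity-check such a file by hand, kubeadm can render everything without changing the node; a minimal sketch, assuming the path from the log:
	# Hedged sketch: dry-run the generated config (renders manifests, changes nothing).
	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run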
	I1227 09:36:21.806305  622335 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1227 09:36:21.809898  622335 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 09:36:21.820032  622335 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 09:36:21.920758  622335 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 09:36:21.960171  622335 certs.go:69] Setting up /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/embed-certs-912564 for IP: 192.168.94.2
	I1227 09:36:21.960196  622335 certs.go:195] generating shared ca certs ...
	I1227 09:36:21.960231  622335 certs.go:227] acquiring lock for ca certs: {Name:mkab52f7e7387ca5a3a8b54d27e85c3698ea95d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:36:21.960474  622335 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22343-373581/.minikube/ca.key
	I1227 09:36:21.960554  622335 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22343-373581/.minikube/proxy-client-ca.key
	I1227 09:36:21.960569  622335 certs.go:257] generating profile certs ...
	I1227 09:36:21.960701  622335 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/embed-certs-912564/client.key
	I1227 09:36:21.960779  622335 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/embed-certs-912564/apiserver.key.6601433b
	I1227 09:36:21.960888  622335 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/embed-certs-912564/proxy-client.key
	I1227 09:36:21.961033  622335 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/377171.pem (1338 bytes)
	W1227 09:36:21.961086  622335 certs.go:480] ignoring /home/jenkins/minikube-integration/22343-373581/.minikube/certs/377171_empty.pem, impossibly tiny 0 bytes
	I1227 09:36:21.961113  622335 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca-key.pem (1679 bytes)
	I1227 09:36:21.961150  622335 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca.pem (1082 bytes)
	I1227 09:36:21.961186  622335 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/cert.pem (1123 bytes)
	I1227 09:36:21.961225  622335 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/key.pem (1679 bytes)
	I1227 09:36:21.961298  622335 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-373581/.minikube/files/etc/ssl/certs/3771712.pem (1708 bytes)
	I1227 09:36:21.962178  622335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1227 09:36:21.985651  622335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1227 09:36:22.006677  622335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1227 09:36:22.029280  622335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1227 09:36:22.054424  622335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/embed-certs-912564/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1227 09:36:22.077264  622335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/embed-certs-912564/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1227 09:36:22.095602  622335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/embed-certs-912564/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1227 09:36:22.113971  622335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/embed-certs-912564/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1227 09:36:22.131748  622335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/certs/377171.pem --> /usr/share/ca-certificates/377171.pem (1338 bytes)
	I1227 09:36:22.149344  622335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/files/etc/ssl/certs/3771712.pem --> /usr/share/ca-certificates/3771712.pem (1708 bytes)
	I1227 09:36:22.167734  622335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1227 09:36:22.187888  622335 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1227 09:36:22.201027  622335 ssh_runner.go:195] Run: openssl version
	I1227 09:36:22.207221  622335 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:36:22.214467  622335 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1227 09:36:22.221999  622335 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:36:22.226212  622335 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 09:06 /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:36:22.226259  622335 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:36:22.269710  622335 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1227 09:36:22.277804  622335 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/377171.pem
	I1227 09:36:22.285678  622335 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/377171.pem /etc/ssl/certs/377171.pem
	I1227 09:36:22.293081  622335 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/377171.pem
	I1227 09:36:22.297054  622335 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 27 09:09 /usr/share/ca-certificates/377171.pem
	I1227 09:36:22.297104  622335 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/377171.pem
	I1227 09:36:22.331452  622335 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1227 09:36:22.339171  622335 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3771712.pem
	I1227 09:36:22.347116  622335 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3771712.pem /etc/ssl/certs/3771712.pem
	I1227 09:36:22.354513  622335 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3771712.pem
	I1227 09:36:22.358217  622335 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 27 09:09 /usr/share/ca-certificates/3771712.pem
	I1227 09:36:22.358268  622335 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3771712.pem
	I1227 09:36:22.394066  622335 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
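	The symlink names being tested above (b5213941.0, 51391683.0, 3ec20f2e.0) follow OpenSSL's subject-hash convention: each trust-store entry is <subject hash>.0 pointing at the PEM. A sketch of maintaining one such link by hand, using the minikubeCA path from the log:
	# Hedged sketch of the c_rehash-style link the log verifies.
	CERT=/usr/share/ca-certificates/minikubeCA.pem
	sudo ln -fs "$CERT" "/etc/ssl/certs/$(openssl x509 -hash -noout -in "$CERT").0"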
	I1227 09:36:22.402772  622335 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1227 09:36:22.406779  622335 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1227 09:36:22.443933  622335 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1227 09:36:22.482195  622335 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1227 09:36:22.529477  622335 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1227 09:36:22.575213  622335 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1227 09:36:22.634783  622335 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
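	The -checkend 86400 runs above exit non-zero when a certificate expires within the next 86400 seconds (24 h), which is how the log checks that existing control-plane certs are still valid for at least a day. A sketch that surfaces the result explicitly; the cert paths are taken from the log:
	# Hedged sketch: flag any control-plane cert expiring within 24 h.
	for c in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
	         /var/lib/minikube/certs/front-proxy-client.crt; do
	  sudo openssl x509 -noout -in "$c" -checkend 86400 || echo "expiring soon: $c"
	done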
	I1227 09:36:22.671980  622335 kubeadm.go:401] StartCluster: {Name:embed-certs-912564 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:embed-certs-912564 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 09:36:22.672057  622335 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 09:36:22.672116  622335 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 09:36:22.705162  622335 cri.go:96] found id: "5383d4cdce95af97f9b9e8e07db61c856f19c8db586c179d8ff736a43046829e"
	I1227 09:36:22.705186  622335 cri.go:96] found id: "4073c03ac98fe56856e504a1aa0d5a1748d26e6ce500dc31ad8e91ee49384cd6"
	I1227 09:36:22.705192  622335 cri.go:96] found id: "ba83fd494a8c5a1bc7eb22555934e2b74494963aa284a3786fa73f76c60a9175"
	I1227 09:36:22.705196  622335 cri.go:96] found id: "663c76b88f42532f7c763b6916bdc80252b590b27aa690c8fe09d547aca1eb6c"
	I1227 09:36:22.705205  622335 cri.go:96] found id: ""
	I1227 09:36:22.705250  622335 ssh_runner.go:195] Run: sudo runc list -f json
	W1227 09:36:22.718695  622335 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T09:36:22Z" level=error msg="open /run/runc: no such file or directory"
	I1227 09:36:22.718785  622335 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1227 09:36:22.726975  622335 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1227 09:36:22.726995  622335 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1227 09:36:22.727046  622335 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1227 09:36:22.736032  622335 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1227 09:36:22.737138  622335 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-912564" does not appear in /home/jenkins/minikube-integration/22343-373581/kubeconfig
	I1227 09:36:22.737771  622335 kubeconfig.go:62] /home/jenkins/minikube-integration/22343-373581/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-912564" cluster setting kubeconfig missing "embed-certs-912564" context setting]
	I1227 09:36:22.738693  622335 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-373581/kubeconfig: {Name:mkb411689a0952d3f8325ae3ededdb7352d2980c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:36:22.740844  622335 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1227 09:36:22.750818  622335 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.94.2
	I1227 09:36:22.750853  622335 kubeadm.go:602] duration metric: took 23.85154ms to restartPrimaryControlPlane
	I1227 09:36:22.750864  622335 kubeadm.go:403] duration metric: took 78.893214ms to StartCluster
	I1227 09:36:22.750883  622335 settings.go:142] acquiring lock: {Name:mkc867fd794159ff1847b43b60548e454c403aa9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:36:22.750952  622335 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22343-373581/kubeconfig
	I1227 09:36:22.753086  622335 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-373581/kubeconfig: {Name:mkb411689a0952d3f8325ae3ededdb7352d2980c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:36:22.753360  622335 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1227 09:36:22.753437  622335 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1227 09:36:22.753532  622335 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-912564"
	I1227 09:36:22.753555  622335 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-912564"
	I1227 09:36:22.753556  622335 addons.go:70] Setting dashboard=true in profile "embed-certs-912564"
	W1227 09:36:22.753563  622335 addons.go:248] addon storage-provisioner should already be in state true
	I1227 09:36:22.753573  622335 addons.go:239] Setting addon dashboard=true in "embed-certs-912564"
	W1227 09:36:22.753581  622335 addons.go:248] addon dashboard should already be in state true
	I1227 09:36:22.753576  622335 addons.go:70] Setting default-storageclass=true in profile "embed-certs-912564"
	I1227 09:36:22.753593  622335 host.go:66] Checking if "embed-certs-912564" exists ...
	I1227 09:36:22.753606  622335 host.go:66] Checking if "embed-certs-912564" exists ...
	I1227 09:36:22.753608  622335 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-912564"
	I1227 09:36:22.753609  622335 config.go:182] Loaded profile config "embed-certs-912564": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:36:22.753938  622335 cli_runner.go:164] Run: docker container inspect embed-certs-912564 --format={{.State.Status}}
	I1227 09:36:22.754139  622335 cli_runner.go:164] Run: docker container inspect embed-certs-912564 --format={{.State.Status}}
	I1227 09:36:22.754187  622335 cli_runner.go:164] Run: docker container inspect embed-certs-912564 --format={{.State.Status}}
	I1227 09:36:22.779956  622335 out.go:179] * Verifying Kubernetes components...
	I1227 09:36:22.780837  622335 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1227 09:36:22.780872  622335 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1227 09:36:22.781059  622335 addons.go:239] Setting addon default-storageclass=true in "embed-certs-912564"
	W1227 09:36:22.781224  622335 addons.go:248] addon default-storageclass should already be in state true
	I1227 09:36:22.781269  622335 host.go:66] Checking if "embed-certs-912564" exists ...
	I1227 09:36:22.781577  622335 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 09:36:22.781774  622335 cli_runner.go:164] Run: docker container inspect embed-certs-912564 --format={{.State.Status}}
	I1227 09:36:22.784055  622335 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1227 09:36:22.784074  622335 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1227 09:36:22.784123  622335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-912564
	I1227 09:36:22.785174  622335 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	W1227 09:36:19.763616  613189 pod_ready.go:104] pod "coredns-5dd5756b68-l2f7v" is not "Ready", error: <nil>
	W1227 09:36:22.263259  613189 pod_ready.go:104] pod "coredns-5dd5756b68-l2f7v" is not "Ready", error: <nil>
	I1227 09:36:22.786204  622335 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1227 09:36:22.786220  622335 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1227 09:36:22.786279  622335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-912564
	I1227 09:36:22.807331  622335 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1227 09:36:22.807359  622335 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1227 09:36:22.807427  622335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-912564
	I1227 09:36:22.807672  622335 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/embed-certs-912564/id_rsa Username:docker}
	I1227 09:36:22.811339  622335 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/embed-certs-912564/id_rsa Username:docker}
	I1227 09:36:22.843561  622335 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/embed-certs-912564/id_rsa Username:docker}
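
Editor's note: the `docker container inspect -f` calls above recover the host port that Docker mapped to the guest's SSH port 22; each ssh client is then pointed at 127.0.0.1 on that port. The same lookup can be reproduced directly (a sketch; the profile name is taken from this log):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// sshPort asks Docker for the host port bound to 22/tcp in the named
// container, using the same Go template that appears in the log.
func sshPort(container string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", "-f",
		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
		container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := sshPort("embed-certs-912564")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Printf("ssh -p %s docker@127.0.0.1\n", port)
}
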
	I1227 09:36:22.926282  622335 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 09:36:22.928069  622335 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1227 09:36:22.937857  622335 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1227 09:36:22.937883  622335 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1227 09:36:22.940823  622335 node_ready.go:35] waiting up to 6m0s for node "embed-certs-912564" to be "Ready" ...
	I1227 09:36:22.952448  622335 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1227 09:36:22.952469  622335 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1227 09:36:22.954110  622335 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1227 09:36:22.967202  622335 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1227 09:36:22.967226  622335 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1227 09:36:22.981928  622335 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1227 09:36:22.981953  622335 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1227 09:36:22.998766  622335 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1227 09:36:22.998803  622335 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1227 09:36:23.012500  622335 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1227 09:36:23.012534  622335 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1227 09:36:23.025289  622335 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1227 09:36:23.025315  622335 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1227 09:36:23.037841  622335 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1227 09:36:23.037868  622335 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1227 09:36:23.050151  622335 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1227 09:36:23.050172  622335 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1227 09:36:23.063127  622335 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1227 09:36:24.188905  622335 node_ready.go:49] node "embed-certs-912564" is "Ready"
	I1227 09:36:24.188945  622335 node_ready.go:38] duration metric: took 1.248089417s for node "embed-certs-912564" to be "Ready" ...
	I1227 09:36:24.188966  622335 api_server.go:52] waiting for apiserver process to appear ...
	I1227 09:36:24.189025  622335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 09:36:24.706854  622335 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.778752913s)
	I1227 09:36:24.706946  622335 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.75280549s)
	I1227 09:36:24.707028  622335 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.643865472s)
	I1227 09:36:24.707081  622335 api_server.go:72] duration metric: took 1.953688648s to wait for apiserver process to appear ...
	I1227 09:36:24.707107  622335 api_server.go:88] waiting for apiserver healthz status ...
	I1227 09:36:24.707132  622335 api_server.go:299] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1227 09:36:24.708777  622335 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-912564 addons enable metrics-server
	
	I1227 09:36:24.713189  622335 api_server.go:325] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1227 09:36:24.713214  622335 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1227 09:36:24.718380  622335 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1227 09:36:24.719363  622335 addons.go:530] duration metric: took 1.965938957s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1227 09:36:25.207981  622335 api_server.go:299] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1227 09:36:25.212198  622335 api_server.go:325] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1227 09:36:25.212222  622335 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1227 09:36:25.707923  622335 api_server.go:299] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1227 09:36:25.712098  622335 api_server.go:325] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1227 09:36:25.712980  622335 api_server.go:141] control plane version: v1.35.0
	I1227 09:36:25.713005  622335 api_server.go:131] duration metric: took 1.005888464s to wait for apiserver health ...
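
Editor's note: the two 500 responses followed by a 200 show the apiserver coming up in stages: post-start hooks such as rbac/bootstrap-roles report "[-] ... failed" until they complete, and the health check only passes once every hook is done. The polling pattern is straightforward to reproduce (a minimal sketch, not minikube's api_server.go; the InsecureSkipVerify shortcut stands in for pinning the cluster CA):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the endpoint until it returns 200 or the
// deadline passes, printing each non-OK body like the log above.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   2 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver not healthy after %v", timeout)
}

func main() {
	fmt.Println(waitForHealthz("https://192.168.94.2:8443/healthz", time.Minute))
}
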
	I1227 09:36:25.713013  622335 system_pods.go:43] waiting for kube-system pods to appear ...
	I1227 09:36:25.716444  622335 system_pods.go:59] 8 kube-system pods found
	I1227 09:36:25.716489  622335 system_pods.go:61] "coredns-7d764666f9-vm5hp" [e07c8612-a077-44b5-b84f-6dda3bc90a64] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 09:36:25.716503  622335 system_pods.go:61] "etcd-embed-certs-912564" [05ab5aa9-c66f-449d-bb47-6c48d44d1db7] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1227 09:36:25.716510  622335 system_pods.go:61] "kindnet-bznfn" [73083928-8435-4e2e-913b-ff93fa424106] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1227 09:36:25.716517  622335 system_pods.go:61] "kube-apiserver-embed-certs-912564" [16628f2b-e9fa-4772-b8cb-8ef74d603b7f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1227 09:36:25.716522  622335 system_pods.go:61] "kube-controller-manager-embed-certs-912564" [78314a52-dd39-4d37-9d70-8002b392e928] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1227 09:36:25.716532  622335 system_pods.go:61] "kube-proxy-dv8ch" [2a923e9f-87c7-472f-b5b9-506bcdc67cb3] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1227 09:36:25.716537  622335 system_pods.go:61] "kube-scheduler-embed-certs-912564" [81aa6d57-9095-411f-b4a0-653e59fccb72] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1227 09:36:25.716542  622335 system_pods.go:61] "storage-provisioner" [af70aaa7-5435-48e3-8275-f12100402980] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1227 09:36:25.716548  622335 system_pods.go:74] duration metric: took 3.528996ms to wait for pod list to return data ...
	I1227 09:36:25.716555  622335 default_sa.go:34] waiting for default service account to be created ...
	I1227 09:36:25.719151  622335 default_sa.go:45] found service account: "default"
	I1227 09:36:25.719169  622335 default_sa.go:55] duration metric: took 2.608678ms for default service account to be created ...
	I1227 09:36:25.719176  622335 system_pods.go:116] waiting for k8s-apps to be running ...
	I1227 09:36:25.721738  622335 system_pods.go:86] 8 kube-system pods found
	I1227 09:36:25.721763  622335 system_pods.go:89] "coredns-7d764666f9-vm5hp" [e07c8612-a077-44b5-b84f-6dda3bc90a64] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 09:36:25.721771  622335 system_pods.go:89] "etcd-embed-certs-912564" [05ab5aa9-c66f-449d-bb47-6c48d44d1db7] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1227 09:36:25.721778  622335 system_pods.go:89] "kindnet-bznfn" [73083928-8435-4e2e-913b-ff93fa424106] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1227 09:36:25.721786  622335 system_pods.go:89] "kube-apiserver-embed-certs-912564" [16628f2b-e9fa-4772-b8cb-8ef74d603b7f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1227 09:36:25.721805  622335 system_pods.go:89] "kube-controller-manager-embed-certs-912564" [78314a52-dd39-4d37-9d70-8002b392e928] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1227 09:36:25.721816  622335 system_pods.go:89] "kube-proxy-dv8ch" [2a923e9f-87c7-472f-b5b9-506bcdc67cb3] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1227 09:36:25.721831  622335 system_pods.go:89] "kube-scheduler-embed-certs-912564" [81aa6d57-9095-411f-b4a0-653e59fccb72] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1227 09:36:25.721839  622335 system_pods.go:89] "storage-provisioner" [af70aaa7-5435-48e3-8275-f12100402980] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1227 09:36:25.721850  622335 system_pods.go:126] duration metric: took 2.668061ms to wait for k8s-apps to be running ...
	I1227 09:36:25.721859  622335 system_svc.go:44] waiting for kubelet service to be running ....
	I1227 09:36:25.721906  622335 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 09:36:25.735465  622335 system_svc.go:56] duration metric: took 13.598326ms WaitForService to wait for kubelet
	I1227 09:36:25.735491  622335 kubeadm.go:587] duration metric: took 2.98210021s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 09:36:25.735514  622335 node_conditions.go:102] verifying NodePressure condition ...
	I1227 09:36:25.737852  622335 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1227 09:36:25.737873  622335 node_conditions.go:123] node cpu capacity is 8
	I1227 09:36:25.737888  622335 node_conditions.go:105] duration metric: took 2.365444ms to run NodePressure ...
	I1227 09:36:25.737899  622335 start.go:242] waiting for startup goroutines ...
	I1227 09:36:25.737908  622335 start.go:247] waiting for cluster config update ...
	I1227 09:36:25.737919  622335 start.go:256] writing updated cluster config ...
	I1227 09:36:25.738145  622335 ssh_runner.go:195] Run: rm -f paused
	I1227 09:36:25.741968  622335 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1227 09:36:25.745155  622335 pod_ready.go:83] waiting for pod "coredns-7d764666f9-vm5hp" in "kube-system" namespace to be "Ready" or be gone ...
	W1227 09:36:22.664879  616179 node_ready.go:57] node "default-k8s-diff-port-497722" has "Ready":"False" status (will retry)
	I1227 09:36:23.163413  616179 node_ready.go:49] node "default-k8s-diff-port-497722" is "Ready"
	I1227 09:36:23.163451  616179 node_ready.go:38] duration metric: took 13.503160256s for node "default-k8s-diff-port-497722" to be "Ready" ...
	I1227 09:36:23.163470  616179 api_server.go:52] waiting for apiserver process to appear ...
	I1227 09:36:23.163548  616179 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 09:36:23.178899  616179 api_server.go:72] duration metric: took 13.808108711s to wait for apiserver process to appear ...
	I1227 09:36:23.178931  616179 api_server.go:88] waiting for apiserver healthz status ...
	I1227 09:36:23.178966  616179 api_server.go:299] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I1227 09:36:23.183768  616179 api_server.go:325] https://192.168.103.2:8444/healthz returned 200:
	ok
	I1227 09:36:23.184969  616179 api_server.go:141] control plane version: v1.35.0
	I1227 09:36:23.184994  616179 api_server.go:131] duration metric: took 6.056457ms to wait for apiserver health ...
	I1227 09:36:23.185003  616179 system_pods.go:43] waiting for kube-system pods to appear ...
	I1227 09:36:23.188536  616179 system_pods.go:59] 8 kube-system pods found
	I1227 09:36:23.188575  616179 system_pods.go:61] "coredns-7d764666f9-wfv5r" [c9445108-899a-4589-9501-4ffa7cd80a43] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 09:36:23.188586  616179 system_pods.go:61] "etcd-default-k8s-diff-port-497722" [5e51d021-5ac6-42ab-82c9-2f3db25a111e] Running
	I1227 09:36:23.188599  616179 system_pods.go:61] "kindnet-rd4dj" [c9a44ecf-3860-4022-bcbe-25cfdb86502a] Running
	I1227 09:36:23.188605  616179 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-497722" [fc2ed6d4-1f2c-4e4d-9a59-07b9ef712893] Running
	I1227 09:36:23.188611  616179 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-497722" [c56ec738-10be-417c-808c-c65189f7a6a8] Running
	I1227 09:36:23.188619  616179 system_pods.go:61] "kube-proxy-6z4vt" [25c2458e-d68a-488d-803c-80e0c6191bad] Running
	I1227 09:36:23.188624  616179 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-497722" [fccea7b8-16e7-488f-b2a4-3d46c2d108d9] Running
	I1227 09:36:23.188633  616179 system_pods.go:61] "storage-provisioner" [c84aab32-9d34-4b1d-a3ee-813926808b75] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1227 09:36:23.188641  616179 system_pods.go:74] duration metric: took 3.631884ms to wait for pod list to return data ...
	I1227 09:36:23.188655  616179 default_sa.go:34] waiting for default service account to be created ...
	I1227 09:36:23.191155  616179 default_sa.go:45] found service account: "default"
	I1227 09:36:23.191180  616179 default_sa.go:55] duration metric: took 2.516479ms for default service account to be created ...
	I1227 09:36:23.191191  616179 system_pods.go:116] waiting for k8s-apps to be running ...
	I1227 09:36:23.194108  616179 system_pods.go:86] 8 kube-system pods found
	I1227 09:36:23.194138  616179 system_pods.go:89] "coredns-7d764666f9-wfv5r" [c9445108-899a-4589-9501-4ffa7cd80a43] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 09:36:23.194145  616179 system_pods.go:89] "etcd-default-k8s-diff-port-497722" [5e51d021-5ac6-42ab-82c9-2f3db25a111e] Running
	I1227 09:36:23.194154  616179 system_pods.go:89] "kindnet-rd4dj" [c9a44ecf-3860-4022-bcbe-25cfdb86502a] Running
	I1227 09:36:23.194160  616179 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-497722" [fc2ed6d4-1f2c-4e4d-9a59-07b9ef712893] Running
	I1227 09:36:23.194165  616179 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-497722" [c56ec738-10be-417c-808c-c65189f7a6a8] Running
	I1227 09:36:23.194171  616179 system_pods.go:89] "kube-proxy-6z4vt" [25c2458e-d68a-488d-803c-80e0c6191bad] Running
	I1227 09:36:23.194175  616179 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-497722" [fccea7b8-16e7-488f-b2a4-3d46c2d108d9] Running
	I1227 09:36:23.194179  616179 system_pods.go:89] "storage-provisioner" [c84aab32-9d34-4b1d-a3ee-813926808b75] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1227 09:36:23.194203  616179 retry.go:84] will retry after 200ms: missing components: kube-dns
	I1227 09:36:23.406401  616179 system_pods.go:86] 8 kube-system pods found
	I1227 09:36:23.406445  616179 system_pods.go:89] "coredns-7d764666f9-wfv5r" [c9445108-899a-4589-9501-4ffa7cd80a43] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 09:36:23.406454  616179 system_pods.go:89] "etcd-default-k8s-diff-port-497722" [5e51d021-5ac6-42ab-82c9-2f3db25a111e] Running
	I1227 09:36:23.406461  616179 system_pods.go:89] "kindnet-rd4dj" [c9a44ecf-3860-4022-bcbe-25cfdb86502a] Running
	I1227 09:36:23.406467  616179 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-497722" [fc2ed6d4-1f2c-4e4d-9a59-07b9ef712893] Running
	I1227 09:36:23.406473  616179 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-497722" [c56ec738-10be-417c-808c-c65189f7a6a8] Running
	I1227 09:36:23.406478  616179 system_pods.go:89] "kube-proxy-6z4vt" [25c2458e-d68a-488d-803c-80e0c6191bad] Running
	I1227 09:36:23.406483  616179 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-497722" [fccea7b8-16e7-488f-b2a4-3d46c2d108d9] Running
	I1227 09:36:23.406488  616179 system_pods.go:89] "storage-provisioner" [c84aab32-9d34-4b1d-a3ee-813926808b75] Running
	I1227 09:36:23.406499  616179 system_pods.go:126] duration metric: took 215.299714ms to wait for k8s-apps to be running ...
	I1227 09:36:23.406513  616179 system_svc.go:44] waiting for kubelet service to be running ....
	I1227 09:36:23.406568  616179 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 09:36:23.422554  616179 system_svc.go:56] duration metric: took 16.02927ms WaitForService to wait for kubelet
	I1227 09:36:23.422585  616179 kubeadm.go:587] duration metric: took 14.05180013s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 09:36:23.422606  616179 node_conditions.go:102] verifying NodePressure condition ...
	I1227 09:36:23.425525  616179 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1227 09:36:23.425563  616179 node_conditions.go:123] node cpu capacity is 8
	I1227 09:36:23.425603  616179 node_conditions.go:105] duration metric: took 2.990468ms to run NodePressure ...
	I1227 09:36:23.425622  616179 start.go:242] waiting for startup goroutines ...
	I1227 09:36:23.425633  616179 start.go:247] waiting for cluster config update ...
	I1227 09:36:23.425646  616179 start.go:256] writing updated cluster config ...
	I1227 09:36:23.426029  616179 ssh_runner.go:195] Run: rm -f paused
	I1227 09:36:23.430159  616179 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1227 09:36:23.433730  616179 pod_ready.go:83] waiting for pod "coredns-7d764666f9-wfv5r" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:36:24.439386  616179 pod_ready.go:94] pod "coredns-7d764666f9-wfv5r" is "Ready"
	I1227 09:36:24.439414  616179 pod_ready.go:86] duration metric: took 1.005660831s for pod "coredns-7d764666f9-wfv5r" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:36:24.442016  616179 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-497722" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:36:24.445826  616179 pod_ready.go:94] pod "etcd-default-k8s-diff-port-497722" is "Ready"
	I1227 09:36:24.445849  616179 pod_ready.go:86] duration metric: took 3.807307ms for pod "etcd-default-k8s-diff-port-497722" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:36:24.447851  616179 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-497722" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:36:24.451495  616179 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-497722" is "Ready"
	I1227 09:36:24.451514  616179 pod_ready.go:86] duration metric: took 3.640701ms for pod "kube-apiserver-default-k8s-diff-port-497722" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:36:24.453395  616179 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-497722" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:36:24.637337  616179 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-497722" is "Ready"
	I1227 09:36:24.637370  616179 pod_ready.go:86] duration metric: took 183.957443ms for pod "kube-controller-manager-default-k8s-diff-port-497722" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:36:24.837272  616179 pod_ready.go:83] waiting for pod "kube-proxy-6z4vt" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:36:25.238026  616179 pod_ready.go:94] pod "kube-proxy-6z4vt" is "Ready"
	I1227 09:36:25.238052  616179 pod_ready.go:86] duration metric: took 400.752514ms for pod "kube-proxy-6z4vt" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:36:25.437923  616179 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-497722" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:36:25.837173  616179 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-497722" is "Ready"
	I1227 09:36:25.837207  616179 pod_ready.go:86] duration metric: took 399.25682ms for pod "kube-scheduler-default-k8s-diff-port-497722" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:36:25.837234  616179 pod_ready.go:40] duration metric: took 2.40703441s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1227 09:36:25.883242  616179 start.go:625] kubectl: 1.35.0, cluster: 1.35.0 (minor skew: 0)
	I1227 09:36:25.886195  616179 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-497722" cluster and "default" namespace by default
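
Editor's note: the pod_ready phase above waits, per component label, for each kube-system pod to report Ready. Outside the test harness the same gate can be expressed with `kubectl wait` (a sketch, assuming kubectl is pointed at this cluster; the selectors are copied from the log):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// One selector per control-plane component, as listed by pod_ready.go.
	selectors := []string{
		"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
		"component=kube-controller-manager", "k8s-app=kube-proxy",
		"component=kube-scheduler",
	}
	for _, sel := range selectors {
		out, err := exec.Command("kubectl", "-n", "kube-system", "wait",
			"--for=condition=Ready", "pod", "-l", sel, "--timeout=4m").CombinedOutput()
		fmt.Printf("%s", out)
		if err != nil {
			fmt.Printf("%s: not ready: %v\n", sel, err)
		}
	}
}
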
	W1227 09:36:24.265838  613189 pod_ready.go:104] pod "coredns-5dd5756b68-l2f7v" is not "Ready", error: <nil>
	W1227 09:36:26.761645  613189 pod_ready.go:104] pod "coredns-5dd5756b68-l2f7v" is not "Ready", error: <nil>
	W1227 09:36:28.763118  613189 pod_ready.go:104] pod "coredns-5dd5756b68-l2f7v" is not "Ready", error: <nil>
	I1227 09:36:29.267372  613189 pod_ready.go:94] pod "coredns-5dd5756b68-l2f7v" is "Ready"
	I1227 09:36:29.267400  613189 pod_ready.go:86] duration metric: took 39.010575903s for pod "coredns-5dd5756b68-l2f7v" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:36:29.271701  613189 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-094398" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:36:29.276827  613189 pod_ready.go:94] pod "etcd-old-k8s-version-094398" is "Ready"
	I1227 09:36:29.276854  613189 pod_ready.go:86] duration metric: took 5.125471ms for pod "etcd-old-k8s-version-094398" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:36:29.280131  613189 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-094398" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:36:29.285165  613189 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-094398" is "Ready"
	I1227 09:36:29.285233  613189 pod_ready.go:86] duration metric: took 5.074304ms for pod "kube-apiserver-old-k8s-version-094398" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:36:29.288829  613189 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-094398" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:36:29.464310  613189 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-094398" is "Ready"
	I1227 09:36:29.464343  613189 pod_ready.go:86] duration metric: took 175.492277ms for pod "kube-controller-manager-old-k8s-version-094398" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:36:29.664360  613189 pod_ready.go:83] waiting for pod "kube-proxy-w8h4h" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:36:30.063146  613189 pod_ready.go:94] pod "kube-proxy-w8h4h" is "Ready"
	I1227 09:36:30.063178  613189 pod_ready.go:86] duration metric: took 398.787394ms for pod "kube-proxy-w8h4h" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:36:30.264144  613189 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-094398" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:36:30.663989  613189 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-094398" is "Ready"
	I1227 09:36:30.664030  613189 pod_ready.go:86] duration metric: took 399.855087ms for pod "kube-scheduler-old-k8s-version-094398" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:36:30.664045  613189 pod_ready.go:40] duration metric: took 40.412649094s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1227 09:36:30.710355  613189 start.go:625] kubectl: 1.35.0, cluster: 1.28.0 (minor skew: 7)
	I1227 09:36:30.713249  613189 out.go:203] 
	W1227 09:36:30.714603  613189 out.go:285] ! /usr/local/bin/kubectl is version 1.35.0, which may have incompatibilities with Kubernetes 1.28.0.
	I1227 09:36:30.715969  613189 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1227 09:36:30.717115  613189 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-094398" cluster and "default" namespace by default
	W1227 09:36:27.750662  622335 pod_ready.go:104] pod "coredns-7d764666f9-vm5hp" is not "Ready", error: <nil>
	W1227 09:36:29.752327  622335 pod_ready.go:104] pod "coredns-7d764666f9-vm5hp" is not "Ready", error: <nil>
	W1227 09:36:32.251754  622335 pod_ready.go:104] pod "coredns-7d764666f9-vm5hp" is not "Ready", error: <nil>
	W1227 09:36:34.327887  622335 pod_ready.go:104] pod "coredns-7d764666f9-vm5hp" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Dec 27 09:36:23 default-k8s-diff-port-497722 crio[777]: time="2025-12-27T09:36:23.206551418Z" level=info msg="Starting container: 927f2e18013b19d8c9ca65ad94c6f7537f94d49a422fedc49aec54d3c7849749" id=06e4f1a3-1758-4daa-945a-34fc84afe934 name=/runtime.v1.RuntimeService/StartContainer
	Dec 27 09:36:23 default-k8s-diff-port-497722 crio[777]: time="2025-12-27T09:36:23.208465518Z" level=info msg="Started container" PID=1892 containerID=927f2e18013b19d8c9ca65ad94c6f7537f94d49a422fedc49aec54d3c7849749 description=kube-system/coredns-7d764666f9-wfv5r/coredns id=06e4f1a3-1758-4daa-945a-34fc84afe934 name=/runtime.v1.RuntimeService/StartContainer sandboxID=27858730ff5f26ffe02dc934fa81a7c29fa30a518c7e6a79f6c0daee677dca6e
	Dec 27 09:36:26 default-k8s-diff-port-497722 crio[777]: time="2025-12-27T09:36:26.33311453Z" level=info msg="Running pod sandbox: default/busybox/POD" id=998a4b80-3852-4ea3-afdf-258066239c56 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 27 09:36:26 default-k8s-diff-port-497722 crio[777]: time="2025-12-27T09:36:26.333202953Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 09:36:26 default-k8s-diff-port-497722 crio[777]: time="2025-12-27T09:36:26.338698145Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:740cf9f488e8711140ed62a816eafefdac1ac0bcf85c8fce949afe674e14a8a7 UID:e562f138-f6d6-49b7-a50f-2f3d20604171 NetNS:/var/run/netns/24705413-2ad5-42a3-953a-a5d9c6016f95 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0002fe518}] Aliases:map[]}"
	Dec 27 09:36:26 default-k8s-diff-port-497722 crio[777]: time="2025-12-27T09:36:26.338730058Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Dec 27 09:36:26 default-k8s-diff-port-497722 crio[777]: time="2025-12-27T09:36:26.348101401Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:740cf9f488e8711140ed62a816eafefdac1ac0bcf85c8fce949afe674e14a8a7 UID:e562f138-f6d6-49b7-a50f-2f3d20604171 NetNS:/var/run/netns/24705413-2ad5-42a3-953a-a5d9c6016f95 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0002fe518}] Aliases:map[]}"
	Dec 27 09:36:26 default-k8s-diff-port-497722 crio[777]: time="2025-12-27T09:36:26.348236804Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Dec 27 09:36:26 default-k8s-diff-port-497722 crio[777]: time="2025-12-27T09:36:26.34897694Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 27 09:36:26 default-k8s-diff-port-497722 crio[777]: time="2025-12-27T09:36:26.349724852Z" level=info msg="Ran pod sandbox 740cf9f488e8711140ed62a816eafefdac1ac0bcf85c8fce949afe674e14a8a7 with infra container: default/busybox/POD" id=998a4b80-3852-4ea3-afdf-258066239c56 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 27 09:36:26 default-k8s-diff-port-497722 crio[777]: time="2025-12-27T09:36:26.350971015Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=f8283c21-7b34-478a-8c26-895e21393208 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 09:36:26 default-k8s-diff-port-497722 crio[777]: time="2025-12-27T09:36:26.351098157Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=f8283c21-7b34-478a-8c26-895e21393208 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 09:36:26 default-k8s-diff-port-497722 crio[777]: time="2025-12-27T09:36:26.351147085Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=f8283c21-7b34-478a-8c26-895e21393208 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 09:36:26 default-k8s-diff-port-497722 crio[777]: time="2025-12-27T09:36:26.351871019Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=c256a9df-03e3-4d03-8d5b-049db219a259 name=/runtime.v1.ImageService/PullImage
	Dec 27 09:36:26 default-k8s-diff-port-497722 crio[777]: time="2025-12-27T09:36:26.353445415Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 27 09:36:28 default-k8s-diff-port-497722 crio[777]: time="2025-12-27T09:36:28.213953154Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=c256a9df-03e3-4d03-8d5b-049db219a259 name=/runtime.v1.ImageService/PullImage
	Dec 27 09:36:28 default-k8s-diff-port-497722 crio[777]: time="2025-12-27T09:36:28.214572036Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=34495020-10ff-4b50-976e-d324ca06c853 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 09:36:28 default-k8s-diff-port-497722 crio[777]: time="2025-12-27T09:36:28.216249315Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=22895cdc-5223-4181-bea8-6eb40b38d0e8 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 09:36:28 default-k8s-diff-port-497722 crio[777]: time="2025-12-27T09:36:28.219877831Z" level=info msg="Creating container: default/busybox/busybox" id=232a9b53-f053-4911-b940-1ba747afc96a name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 09:36:28 default-k8s-diff-port-497722 crio[777]: time="2025-12-27T09:36:28.219978529Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 09:36:28 default-k8s-diff-port-497722 crio[777]: time="2025-12-27T09:36:28.223547361Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 09:36:28 default-k8s-diff-port-497722 crio[777]: time="2025-12-27T09:36:28.223992382Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 09:36:28 default-k8s-diff-port-497722 crio[777]: time="2025-12-27T09:36:28.261548041Z" level=info msg="Created container cdaaa62ea21a79c51e97b2edd589ba3493a0733450142f1758964ef06b5c86a6: default/busybox/busybox" id=232a9b53-f053-4911-b940-1ba747afc96a name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 09:36:28 default-k8s-diff-port-497722 crio[777]: time="2025-12-27T09:36:28.262176676Z" level=info msg="Starting container: cdaaa62ea21a79c51e97b2edd589ba3493a0733450142f1758964ef06b5c86a6" id=e0afb207-5f7c-4e50-a947-3d483253c8c4 name=/runtime.v1.RuntimeService/StartContainer
	Dec 27 09:36:28 default-k8s-diff-port-497722 crio[777]: time="2025-12-27T09:36:28.263847807Z" level=info msg="Started container" PID=1969 containerID=cdaaa62ea21a79c51e97b2edd589ba3493a0733450142f1758964ef06b5c86a6 description=default/busybox/busybox id=e0afb207-5f7c-4e50-a947-3d483253c8c4 name=/runtime.v1.RuntimeService/StartContainer sandboxID=740cf9f488e8711140ed62a816eafefdac1ac0bcf85c8fce949afe674e14a8a7
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	cdaaa62ea21a7       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   9 seconds ago       Running             busybox                   0                   740cf9f488e87       busybox                                                default
	927f2e18013b1       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                      14 seconds ago      Running             coredns                   0                   27858730ff5f2       coredns-7d764666f9-wfv5r                               kube-system
	0303eee82a488       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      14 seconds ago      Running             storage-provisioner       0                   a01deee255f1a       storage-provisioner                                    kube-system
	4cb445ffce93b       docker.io/kindest/kindnetd@sha256:7c22558dc06a570d46ea6e8a73b23cdc754eb81f7c08d3441a3171ad359ffc27    25 seconds ago      Running             kindnet-cni               0                   b7751a56b429d       kindnet-rd4dj                                          kube-system
	388f3a45465eb       32652ff1bbe6b6df16a3dc621fcad4e6e185a2852bffc5847aab79e36c54bfb8                                      27 seconds ago      Running             kube-proxy                0                   4bc6380d35432       kube-proxy-6z4vt                                       kube-system
	1efdecbd217c0       550794e3b12ac21ec7fd940bdfb45f7c8dae3c52acd8c70e580b882de20c3dcc                                      37 seconds ago      Running             kube-scheduler            0                   d5c30e0c49134       kube-scheduler-default-k8s-diff-port-497722            kube-system
	5cf25b46db4c7       2c9a4b058bd7e6ec479c38f9e8a1dac2f5ee5b0a3ebda6dfac968e8720229508                                      37 seconds ago      Running             kube-controller-manager   0                   18aeebe4e4731       kube-controller-manager-default-k8s-diff-port-497722   kube-system
	8cf34d4124a2d       5c6acd67e9cd10eb60a246cd233db251ef62ea97e6572f897e873f0cb648f499                                      37 seconds ago      Running             kube-apiserver            0                   73ab0589cc2ad       kube-apiserver-default-k8s-diff-port-497722            kube-system
	4a087c7c5857c       0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2                                      37 seconds ago      Running             etcd                      0                   658782192d6cb       etcd-default-k8s-diff-port-497722                      kube-system
	
	
	==> coredns [927f2e18013b19d8c9ca65ad94c6f7537f94d49a422fedc49aec54d3c7849749] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:57824 - 55355 "HINFO IN 1185448364297940099.882701424460790155. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.017082381s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-497722
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-497722
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6d12708197eb5984f26340287ced0d25367967e8
	                    minikube.k8s.io/name=default-k8s-diff-port-497722
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_27T09_36_05_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 27 Dec 2025 09:36:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-497722
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 27 Dec 2025 09:36:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 27 Dec 2025 09:36:34 +0000   Sat, 27 Dec 2025 09:36:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 27 Dec 2025 09:36:34 +0000   Sat, 27 Dec 2025 09:36:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 27 Dec 2025 09:36:34 +0000   Sat, 27 Dec 2025 09:36:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 27 Dec 2025 09:36:34 +0000   Sat, 27 Dec 2025 09:36:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    default-k8s-diff-port-497722
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 6a05ebbd9dbde47388fc365d694bbbfd
	  System UUID:                e2fdc3b1-3a68-4551-be95-6955cffc1d64
	  Boot ID:                    38891013-55ac-4a66-aef6-0b0711ecc60c
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-7d764666f9-wfv5r                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     28s
	  kube-system                 etcd-default-k8s-diff-port-497722                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         35s
	  kube-system                 kindnet-rd4dj                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      28s
	  kube-system                 kube-apiserver-default-k8s-diff-port-497722             250m (3%)     0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-497722    200m (2%)     0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-proxy-6z4vt                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-scheduler-default-k8s-diff-port-497722             100m (1%)     0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  29s   node-controller  Node default-k8s-diff-port-497722 event: Registered Node default-k8s-diff-port-497722 in Controller
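
Editor's note: the percentages in the "Allocated resources" table above are integer-truncated ratios of summed requests (or limits) to the node's allocatable capacity; 850m CPU against 8 cores is 10%, and 220Mi against 32863360Ki of memory rounds down to 0%. A two-line check:

package main

import "fmt"

func main() {
	fmt.Println(850 * 100 / 8000)            // CPU requests: 850m of 8000m -> 10 (%)
	fmt.Println(220 * 1024 * 100 / 32863360) // memory: 220Mi of 32863360Ki -> 0 (%)
}
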
	
	
	==> dmesg <==
	[Dec27 09:02] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 3a ce 76 ea 00 16 08 06
	[ +15.308804] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 5e ed 76 f4 5a 59 08 06
	[  +0.000321] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 3a ce 76 ea 00 16 08 06
	[  +1.257710] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 62 30 8d 6e b6 d3 08 06
	[Dec27 09:03] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff de f8 3b c4 47 35 08 06
	[  +0.002052] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff de 0f d7 5a 4b 84 08 06
	[ +15.582399] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff f6 ce 0c 6c 86 d2 08 06
	[  +0.000307] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 62 30 8d 6e b6 d3 08 06
	[  +9.916919] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 5e cc 0c a0 d8 19 08 06
	[  +0.000360] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 06 01 72 f8 79 43 08 06
	[ +21.695394] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 66 bb be 81 50 ce 08 06
	[  +0.000381] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff de 0f d7 5a 4b 84 08 06
	
	
	==> etcd [4a087c7c5857c85ac9e4796a3279080feb690c0e65be78f133f1ba4b689c4f2a] <==
	{"level":"info","ts":"2025-12-27T09:36:00.189217Z","caller":"embed/etcd.go:890","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-12-27T09:36:00.581306Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"f23060b075c4c089 is starting a new election at term 1"}
	{"level":"info","ts":"2025-12-27T09:36:00.581351Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"f23060b075c4c089 became pre-candidate at term 1"}
	{"level":"info","ts":"2025-12-27T09:36:00.581418Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"f23060b075c4c089 received MsgPreVoteResp from f23060b075c4c089 at term 1"}
	{"level":"info","ts":"2025-12-27T09:36:00.581436Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"f23060b075c4c089 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-27T09:36:00.581452Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"f23060b075c4c089 became candidate at term 2"}
	{"level":"info","ts":"2025-12-27T09:36:00.581938Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"f23060b075c4c089 received MsgVoteResp from f23060b075c4c089 at term 2"}
	{"level":"info","ts":"2025-12-27T09:36:00.581959Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"f23060b075c4c089 has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-27T09:36:00.581985Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"f23060b075c4c089 became leader at term 2"}
	{"level":"info","ts":"2025-12-27T09:36:00.581993Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: f23060b075c4c089 elected leader f23060b075c4c089 at term 2"}
	{"level":"info","ts":"2025-12-27T09:36:00.582471Z","caller":"etcdserver/server.go:2420","msg":"setting up initial cluster version using v3 API","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-27T09:36:00.582991Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T09:36:00.582996Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"f23060b075c4c089","local-member-attributes":"{Name:default-k8s-diff-port-497722 ClientURLs:[https://192.168.103.2:2379]}","cluster-id":"3336683c081d149d","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-27T09:36:00.583013Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T09:36:00.583052Z","caller":"membership/cluster.go:682","msg":"set initial cluster version","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-27T09:36:00.583147Z","caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-27T09:36:00.583182Z","caller":"etcdserver/server.go:2440","msg":"cluster version is updated","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-27T09:36:00.583221Z","caller":"version/monitor.go:116","msg":"cluster version differs from storage version.","cluster-version":"3.6.0","storage-version":"3.5.0"}
	{"level":"info","ts":"2025-12-27T09:36:00.583350Z","caller":"schema/migration.go:65","msg":"updated storage version","new-storage-version":"3.6.0"}
	{"level":"info","ts":"2025-12-27T09:36:00.583376Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-27T09:36:00.583393Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-27T09:36:00.584350Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-27T09:36:00.584464Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-27T09:36:00.587241Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-27T09:36:00.587604Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.103.2:2379"}
	
	
	==> kernel <==
	 09:36:37 up  1:19,  0 user,  load average: 3.43, 3.15, 2.32
	Linux default-k8s-diff-port-497722 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [4cb445ffce93b8f9efc28e53951b50069b5e57eddf11ddc35c990429211f1716] <==
	I1227 09:36:12.221866       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1227 09:36:12.222153       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1227 09:36:12.222281       1 main.go:148] setting mtu 1500 for CNI 
	I1227 09:36:12.222299       1 main.go:178] kindnetd IP family: "ipv4"
	I1227 09:36:12.222319       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-27T09:36:12Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1227 09:36:12.424752       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1227 09:36:12.424844       1 controller.go:381] "Waiting for informer caches to sync"
	I1227 09:36:12.424858       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1227 09:36:12.425293       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1227 09:36:12.672279       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1227 09:36:12.672301       1 metrics.go:72] Registering metrics
	I1227 09:36:12.672353       1 controller.go:711] "Syncing nftables rules"
	I1227 09:36:22.424877       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1227 09:36:22.424951       1 main.go:301] handling current node
	I1227 09:36:32.424770       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1227 09:36:32.424863       1 main.go:301] handling current node
	
	
	==> kube-apiserver [8cf34d4124a2dbc0138140fa251cd6e1517ea12d700f0706ce8731ef34ef5c29] <==
	I1227 09:36:01.640808       1 policy_source.go:248] refreshing policies
	E1227 09:36:01.674241       1 controller.go:156] "Error while syncing ConfigMap" err="namespaces \"kube-system\" not found" logger="UnhandledError" configmap="kube-system/kube-apiserver-legacy-service-account-token-tracking"
	I1227 09:36:01.722279       1 controller.go:667] quota admission added evaluator for: namespaces
	I1227 09:36:01.724511       1 default_servicecidr_controller.go:231] Setting default ServiceCIDR condition Ready to True
	I1227 09:36:01.724662       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1227 09:36:01.730190       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1227 09:36:01.817500       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1227 09:36:02.525584       1 storage_scheduling.go:123] created PriorityClass system-node-critical with value 2000001000
	I1227 09:36:02.530025       1 storage_scheduling.go:123] created PriorityClass system-cluster-critical with value 2000000000
	I1227 09:36:02.530044       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1227 09:36:03.050654       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1227 09:36:03.088308       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1227 09:36:03.229892       1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1227 09:36:03.236233       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
	I1227 09:36:03.237424       1 controller.go:667] quota admission added evaluator for: endpoints
	I1227 09:36:03.241785       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1227 09:36:03.560173       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1227 09:36:04.282654       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1227 09:36:04.308295       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1227 09:36:04.320064       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1227 09:36:09.109541       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1227 09:36:09.112920       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1227 09:36:09.460495       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1227 09:36:09.510351       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1227 09:36:36.114961       1 conn.go:339] Error on socket receive: read tcp 192.168.103.2:8444->192.168.103.1:33888: use of closed network connection
	
	
	==> kube-controller-manager [5cf25b46db4c74b84da2e62e19283ca1f428c9993b96868db29d01631d70f365] <==
	I1227 09:36:08.363516       1 shared_informer.go:377] "Caches are synced"
	I1227 09:36:08.363669       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="default-k8s-diff-port-497722"
	I1227 09:36:08.363718       1 node_lifecycle_controller.go:1038] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I1227 09:36:08.363523       1 shared_informer.go:377] "Caches are synced"
	I1227 09:36:08.364657       1 shared_informer.go:377] "Caches are synced"
	I1227 09:36:08.364682       1 shared_informer.go:377] "Caches are synced"
	I1227 09:36:08.364701       1 shared_informer.go:377] "Caches are synced"
	I1227 09:36:08.364706       1 shared_informer.go:377] "Caches are synced"
	I1227 09:36:08.364824       1 shared_informer.go:377] "Caches are synced"
	I1227 09:36:08.365035       1 shared_informer.go:377] "Caches are synced"
	I1227 09:36:08.365468       1 shared_informer.go:377] "Caches are synced"
	I1227 09:36:08.365525       1 shared_informer.go:377] "Caches are synced"
	I1227 09:36:08.365711       1 shared_informer.go:377] "Caches are synced"
	I1227 09:36:08.365749       1 range_allocator.go:177] "Sending events to api server"
	I1227 09:36:08.365776       1 range_allocator.go:181] "Starting range CIDR allocator"
	I1227 09:36:08.365807       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 09:36:08.365814       1 shared_informer.go:377] "Caches are synced"
	I1227 09:36:08.370549       1 shared_informer.go:377] "Caches are synced"
	I1227 09:36:08.371134       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 09:36:08.375584       1 range_allocator.go:433] "Set node PodCIDR" node="default-k8s-diff-port-497722" podCIDRs=["10.244.0.0/24"]
	I1227 09:36:08.466131       1 shared_informer.go:377] "Caches are synced"
	I1227 09:36:08.466167       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1227 09:36:08.466182       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1227 09:36:08.472275       1 shared_informer.go:377] "Caches are synced"
	I1227 09:36:23.366077       1 node_lifecycle_controller.go:1057] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-proxy [388f3a45465ebc913853cfc29dcfc3ad4a9b8b98022dbfca48d1daef703ea070] <==
	I1227 09:36:09.921681       1 server_linux.go:53] "Using iptables proxy"
	I1227 09:36:09.997323       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 09:36:10.098470       1 shared_informer.go:377] "Caches are synced"
	I1227 09:36:10.098515       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1227 09:36:10.098624       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1227 09:36:10.116916       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1227 09:36:10.116969       1 server_linux.go:136] "Using iptables Proxier"
	I1227 09:36:10.122007       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1227 09:36:10.122414       1 server.go:529] "Version info" version="v1.35.0"
	I1227 09:36:10.122432       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 09:36:10.124533       1 config.go:200] "Starting service config controller"
	I1227 09:36:10.124555       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1227 09:36:10.124576       1 config.go:106] "Starting endpoint slice config controller"
	I1227 09:36:10.124582       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1227 09:36:10.124618       1 config.go:403] "Starting serviceCIDR config controller"
	I1227 09:36:10.124627       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1227 09:36:10.125113       1 config.go:309] "Starting node config controller"
	I1227 09:36:10.125138       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1227 09:36:10.125146       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1227 09:36:10.225242       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1227 09:36:10.225246       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1227 09:36:10.225242       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [1efdecbd217c0588585800a0fc760c3f8949e2393dc8d2b9e7053fa772985e57] <==
	E1227 09:36:01.571634       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E1227 09:36:01.571673       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E1227 09:36:01.571729       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E1227 09:36:01.571742       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E1227 09:36:01.571766       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E1227 09:36:01.571918       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E1227 09:36:01.571926       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E1227 09:36:01.571967       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E1227 09:36:01.572036       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E1227 09:36:02.389744       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E1227 09:36:02.409345       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E1227 09:36:02.413621       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E1227 09:36:02.419759       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E1227 09:36:02.497318       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E1227 09:36:02.500342       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E1227 09:36:02.514948       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E1227 09:36:02.538390       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E1227 09:36:02.542420       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E1227 09:36:02.547772       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E1227 09:36:02.580078       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E1227 09:36:02.594678       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E1227 09:36:02.644161       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1693" type="*v1.ConfigMap"
	E1227 09:36:02.661642       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E1227 09:36:02.864033       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	I1227 09:36:04.766864       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 27 09:36:09 default-k8s-diff-port-497722 kubelet[1310]: I1227 09:36:09.590500    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/25c2458e-d68a-488d-803c-80e0c6191bad-lib-modules\") pod \"kube-proxy-6z4vt\" (UID: \"25c2458e-d68a-488d-803c-80e0c6191bad\") " pod="kube-system/kube-proxy-6z4vt"
	Dec 27 09:36:09 default-k8s-diff-port-497722 kubelet[1310]: I1227 09:36:09.590636    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxc25\" (UniqueName: \"kubernetes.io/projected/c9a44ecf-3860-4022-bcbe-25cfdb86502a-kube-api-access-sxc25\") pod \"kindnet-rd4dj\" (UID: \"c9a44ecf-3860-4022-bcbe-25cfdb86502a\") " pod="kube-system/kindnet-rd4dj"
	Dec 27 09:36:09 default-k8s-diff-port-497722 kubelet[1310]: I1227 09:36:09.590704    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/c9a44ecf-3860-4022-bcbe-25cfdb86502a-cni-cfg\") pod \"kindnet-rd4dj\" (UID: \"c9a44ecf-3860-4022-bcbe-25cfdb86502a\") " pod="kube-system/kindnet-rd4dj"
	Dec 27 09:36:09 default-k8s-diff-port-497722 kubelet[1310]: I1227 09:36:09.590726    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c9a44ecf-3860-4022-bcbe-25cfdb86502a-xtables-lock\") pod \"kindnet-rd4dj\" (UID: \"c9a44ecf-3860-4022-bcbe-25cfdb86502a\") " pod="kube-system/kindnet-rd4dj"
	Dec 27 09:36:09 default-k8s-diff-port-497722 kubelet[1310]: E1227 09:36:09.813922    1310 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-default-k8s-diff-port-497722" containerName="kube-apiserver"
	Dec 27 09:36:10 default-k8s-diff-port-497722 kubelet[1310]: I1227 09:36:10.206074    1310 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-6z4vt" podStartSLOduration=1.206051483 podStartE2EDuration="1.206051483s" podCreationTimestamp="2025-12-27 09:36:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 09:36:10.205923993 +0000 UTC m=+6.134876252" watchObservedRunningTime="2025-12-27 09:36:10.206051483 +0000 UTC m=+6.135003722"
	Dec 27 09:36:12 default-k8s-diff-port-497722 kubelet[1310]: E1227 09:36:12.227986    1310 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-default-k8s-diff-port-497722" containerName="etcd"
	Dec 27 09:36:12 default-k8s-diff-port-497722 kubelet[1310]: I1227 09:36:12.236279    1310 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kindnet-rd4dj" podStartSLOduration=1.080306792 podStartE2EDuration="3.236259277s" podCreationTimestamp="2025-12-27 09:36:09 +0000 UTC" firstStartedPulling="2025-12-27 09:36:09.848831136 +0000 UTC m=+5.777783366" lastFinishedPulling="2025-12-27 09:36:12.004783614 +0000 UTC m=+7.933735851" observedRunningTime="2025-12-27 09:36:12.211522089 +0000 UTC m=+8.140474329" watchObservedRunningTime="2025-12-27 09:36:12.236259277 +0000 UTC m=+8.165211514"
	Dec 27 09:36:15 default-k8s-diff-port-497722 kubelet[1310]: E1227 09:36:15.458960    1310 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-default-k8s-diff-port-497722" containerName="kube-scheduler"
	Dec 27 09:36:16 default-k8s-diff-port-497722 kubelet[1310]: E1227 09:36:16.213393    1310 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-default-k8s-diff-port-497722" containerName="kube-scheduler"
	Dec 27 09:36:17 default-k8s-diff-port-497722 kubelet[1310]: E1227 09:36:17.135326    1310 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-default-k8s-diff-port-497722" containerName="kube-controller-manager"
	Dec 27 09:36:19 default-k8s-diff-port-497722 kubelet[1310]: E1227 09:36:19.818744    1310 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-default-k8s-diff-port-497722" containerName="kube-apiserver"
	Dec 27 09:36:22 default-k8s-diff-port-497722 kubelet[1310]: E1227 09:36:22.229713    1310 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-default-k8s-diff-port-497722" containerName="etcd"
	Dec 27 09:36:22 default-k8s-diff-port-497722 kubelet[1310]: I1227 09:36:22.810877    1310 kubelet_node_status.go:427] "Fast updating node status as it just became ready"
	Dec 27 09:36:22 default-k8s-diff-port-497722 kubelet[1310]: I1227 09:36:22.880551    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/c84aab32-9d34-4b1d-a3ee-813926808b75-tmp\") pod \"storage-provisioner\" (UID: \"c84aab32-9d34-4b1d-a3ee-813926808b75\") " pod="kube-system/storage-provisioner"
	Dec 27 09:36:22 default-k8s-diff-port-497722 kubelet[1310]: I1227 09:36:22.880756    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wl4xl\" (UniqueName: \"kubernetes.io/projected/c9445108-899a-4589-9501-4ffa7cd80a43-kube-api-access-wl4xl\") pod \"coredns-7d764666f9-wfv5r\" (UID: \"c9445108-899a-4589-9501-4ffa7cd80a43\") " pod="kube-system/coredns-7d764666f9-wfv5r"
	Dec 27 09:36:22 default-k8s-diff-port-497722 kubelet[1310]: I1227 09:36:22.880990    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxmgp\" (UniqueName: \"kubernetes.io/projected/c84aab32-9d34-4b1d-a3ee-813926808b75-kube-api-access-hxmgp\") pod \"storage-provisioner\" (UID: \"c84aab32-9d34-4b1d-a3ee-813926808b75\") " pod="kube-system/storage-provisioner"
	Dec 27 09:36:22 default-k8s-diff-port-497722 kubelet[1310]: I1227 09:36:22.881024    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c9445108-899a-4589-9501-4ffa7cd80a43-config-volume\") pod \"coredns-7d764666f9-wfv5r\" (UID: \"c9445108-899a-4589-9501-4ffa7cd80a43\") " pod="kube-system/coredns-7d764666f9-wfv5r"
	Dec 27 09:36:23 default-k8s-diff-port-497722 kubelet[1310]: E1227 09:36:23.229907    1310 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-wfv5r" containerName="coredns"
	Dec 27 09:36:23 default-k8s-diff-port-497722 kubelet[1310]: I1227 09:36:23.243097    1310 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-wfv5r" podStartSLOduration=14.243077511 podStartE2EDuration="14.243077511s" podCreationTimestamp="2025-12-27 09:36:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 09:36:23.242894382 +0000 UTC m=+19.171846624" watchObservedRunningTime="2025-12-27 09:36:23.243077511 +0000 UTC m=+19.172029769"
	Dec 27 09:36:23 default-k8s-diff-port-497722 kubelet[1310]: I1227 09:36:23.255331    1310 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=14.255310879 podStartE2EDuration="14.255310879s" podCreationTimestamp="2025-12-27 09:36:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 09:36:23.255128636 +0000 UTC m=+19.184080872" watchObservedRunningTime="2025-12-27 09:36:23.255310879 +0000 UTC m=+19.184263116"
	Dec 27 09:36:24 default-k8s-diff-port-497722 kubelet[1310]: E1227 09:36:24.234435    1310 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-wfv5r" containerName="coredns"
	Dec 27 09:36:25 default-k8s-diff-port-497722 kubelet[1310]: E1227 09:36:25.236830    1310 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-wfv5r" containerName="coredns"
	Dec 27 09:36:26 default-k8s-diff-port-497722 kubelet[1310]: I1227 09:36:26.104250    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgwqg\" (UniqueName: \"kubernetes.io/projected/e562f138-f6d6-49b7-a50f-2f3d20604171-kube-api-access-rgwqg\") pod \"busybox\" (UID: \"e562f138-f6d6-49b7-a50f-2f3d20604171\") " pod="default/busybox"
	Dec 27 09:36:36 default-k8s-diff-port-497722 kubelet[1310]: E1227 09:36:36.114923    1310 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:51186->127.0.0.1:38213: write tcp 127.0.0.1:51186->127.0.0.1:38213: write: broken pipe
	
	
	==> storage-provisioner [0303eee82a488753668ca293a96c2c4ed07162c4aab3a5190f30cc3c4e6755a5] <==
	I1227 09:36:23.212696       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1227 09:36:23.220975       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1227 09:36:23.221018       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1227 09:36:23.222954       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 09:36:23.227876       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1227 09:36:23.228034       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1227 09:36:23.228196       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"bb43f7f4-07db-4bad-82fa-044874eea265", APIVersion:"v1", ResourceVersion:"414", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-497722_f54ffdef-ab0f-4dca-8cd3-b0102ce1a432 became leader
	I1227 09:36:23.228311       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-497722_f54ffdef-ab0f-4dca-8cd3-b0102ce1a432!
	W1227 09:36:23.230324       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 09:36:23.235924       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1227 09:36:23.328920       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-497722_f54ffdef-ab0f-4dca-8cd3-b0102ce1a432!
	W1227 09:36:25.239330       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 09:36:25.243368       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 09:36:27.245959       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 09:36:27.250456       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 09:36:29.254147       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 09:36:29.263532       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 09:36:31.267886       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 09:36:31.272376       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 09:36:33.276449       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 09:36:33.282268       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 09:36:35.285606       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 09:36:35.289282       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 09:36:37.292384       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 09:36:37.296613       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-497722 -n default-k8s-diff-port-497722
helpers_test.go:270: (dbg) Run:  kubectl --context default-k8s-diff-port-497722 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.00s)
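Note: the post-mortem snapshot above maps onto standard kubectl queries, should a reader want to reproduce it against a live profile; a minimal sketch (the context/profile name is taken from this run, everything else is assumed):

	kubectl --context default-k8s-diff-port-497722 describe node default-k8s-diff-port-497722
	kubectl --context default-k8s-diff-port-497722 get po -A --field-selector=status.phase!=Running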

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (5.9s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-094398 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p old-k8s-version-094398 --alsologtostderr -v=1: exit status 80 (2.379752163s)

                                                
                                                
-- stdout --
	* Pausing node old-k8s-version-094398 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1227 09:36:42.440616  627413 out.go:360] Setting OutFile to fd 1 ...
	I1227 09:36:42.440734  627413 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:36:42.440746  627413 out.go:374] Setting ErrFile to fd 2...
	I1227 09:36:42.440751  627413 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:36:42.440954  627413 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22343-373581/.minikube/bin
	I1227 09:36:42.441200  627413 out.go:368] Setting JSON to false
	I1227 09:36:42.441219  627413 mustload.go:66] Loading cluster: old-k8s-version-094398
	I1227 09:36:42.441546  627413 config.go:182] Loaded profile config "old-k8s-version-094398": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1227 09:36:42.441949  627413 cli_runner.go:164] Run: docker container inspect old-k8s-version-094398 --format={{.State.Status}}
	I1227 09:36:42.459695  627413 host.go:66] Checking if "old-k8s-version-094398" exists ...
	I1227 09:36:42.459954  627413 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 09:36:42.516393  627413 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:98 OomKillDisable:false NGoroutines:96 SystemTime:2025-12-27 09:36:42.505947888 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1227 09:36:42.517036  627413 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22158/minikube-v1.37.0-1766719468-22158-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1766719468-22158/minikube-v1.37.0-1766719468-22158-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1766719468-22158-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) preload-source:auto profile:old-k8s-version-094398 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) rosetta:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1227 09:36:42.518641  627413 out.go:179] * Pausing node old-k8s-version-094398 ... 
	I1227 09:36:42.519577  627413 host.go:66] Checking if "old-k8s-version-094398" exists ...
	I1227 09:36:42.519911  627413 ssh_runner.go:195] Run: systemctl --version
	I1227 09:36:42.519978  627413 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-094398
	I1227 09:36:42.537657  627413 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/old-k8s-version-094398/id_rsa Username:docker}
	I1227 09:36:42.626346  627413 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 09:36:42.647997  627413 pause.go:52] kubelet running: true
	I1227 09:36:42.648068  627413 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1227 09:36:42.798566  627413 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1227 09:36:42.798684  627413 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1227 09:36:42.863741  627413 cri.go:96] found id: "2613f183356984cb4a6f283d937f3370b199b0a7966e716e98cf84ee962e01be"
	I1227 09:36:42.863765  627413 cri.go:96] found id: "4ca19515ea39c3a7dcba93a0c272b27d0f41a8aed7ed1b0673487d1d34e4a94e"
	I1227 09:36:42.863770  627413 cri.go:96] found id: "6f2897ce522ad46f87485549185bcbb88c4bb30c132de49b4d738824df1f3616"
	I1227 09:36:42.863776  627413 cri.go:96] found id: "35bf241ea4f56cd57193378e8b7210de26f0ac1767ec286d23632fe9931444cc"
	I1227 09:36:42.863780  627413 cri.go:96] found id: "784231ee46284fb40820138878c1485b83721a6aee020dd20705f6b5990a1df6"
	I1227 09:36:42.863785  627413 cri.go:96] found id: "c5fc71eb798e5068dea2558184fc2f1324dfde7c0fb1d8eb63ec2e35afe24f87"
	I1227 09:36:42.863803  627413 cri.go:96] found id: "e5693fba043840c8a3d1117c5220be21e1cfd4a801e563c4512c5828ce4adbcd"
	I1227 09:36:42.863808  627413 cri.go:96] found id: "8223e42bf97a022ceca335fd381b6b6c3aeac7c607125c2e1cbf3b803876c7ad"
	I1227 09:36:42.863812  627413 cri.go:96] found id: "9f6ac155f6a42bf5c5966c64219e0b13256e12d474b8bdc4feb2e3846eeca31d"
	I1227 09:36:42.863821  627413 cri.go:96] found id: "e515e63f9478f3764e817ba81f243e7bec7930bfcdb3d016306773fdc664f99a"
	I1227 09:36:42.863827  627413 cri.go:96] found id: "226e6694486c7dbda528967c387d7f6eba3323ceedc06d943992c2aa1a31b965"
	I1227 09:36:42.863830  627413 cri.go:96] found id: ""
	I1227 09:36:42.863870  627413 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 09:36:42.875456  627413 retry.go:84] will retry after 300ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T09:36:42Z" level=error msg="open /run/runc: no such file or directory"
	I1227 09:36:43.193991  627413 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 09:36:43.207101  627413 pause.go:52] kubelet running: false
	I1227 09:36:43.207152  627413 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1227 09:36:43.348187  627413 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1227 09:36:43.348278  627413 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1227 09:36:43.412393  627413 cri.go:96] found id: "2613f183356984cb4a6f283d937f3370b199b0a7966e716e98cf84ee962e01be"
	I1227 09:36:43.412413  627413 cri.go:96] found id: "4ca19515ea39c3a7dcba93a0c272b27d0f41a8aed7ed1b0673487d1d34e4a94e"
	I1227 09:36:43.412417  627413 cri.go:96] found id: "6f2897ce522ad46f87485549185bcbb88c4bb30c132de49b4d738824df1f3616"
	I1227 09:36:43.412420  627413 cri.go:96] found id: "35bf241ea4f56cd57193378e8b7210de26f0ac1767ec286d23632fe9931444cc"
	I1227 09:36:43.412423  627413 cri.go:96] found id: "784231ee46284fb40820138878c1485b83721a6aee020dd20705f6b5990a1df6"
	I1227 09:36:43.412427  627413 cri.go:96] found id: "c5fc71eb798e5068dea2558184fc2f1324dfde7c0fb1d8eb63ec2e35afe24f87"
	I1227 09:36:43.412445  627413 cri.go:96] found id: "e5693fba043840c8a3d1117c5220be21e1cfd4a801e563c4512c5828ce4adbcd"
	I1227 09:36:43.412451  627413 cri.go:96] found id: "8223e42bf97a022ceca335fd381b6b6c3aeac7c607125c2e1cbf3b803876c7ad"
	I1227 09:36:43.412455  627413 cri.go:96] found id: "9f6ac155f6a42bf5c5966c64219e0b13256e12d474b8bdc4feb2e3846eeca31d"
	I1227 09:36:43.412465  627413 cri.go:96] found id: "e515e63f9478f3764e817ba81f243e7bec7930bfcdb3d016306773fdc664f99a"
	I1227 09:36:43.412473  627413 cri.go:96] found id: "226e6694486c7dbda528967c387d7f6eba3323ceedc06d943992c2aa1a31b965"
	I1227 09:36:43.412478  627413 cri.go:96] found id: ""
	I1227 09:36:43.412526  627413 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 09:36:43.926852  627413 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 09:36:43.940538  627413 pause.go:52] kubelet running: false
	I1227 09:36:43.940595  627413 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1227 09:36:44.085758  627413 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1227 09:36:44.085864  627413 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1227 09:36:44.152334  627413 cri.go:96] found id: "2613f183356984cb4a6f283d937f3370b199b0a7966e716e98cf84ee962e01be"
	I1227 09:36:44.152363  627413 cri.go:96] found id: "4ca19515ea39c3a7dcba93a0c272b27d0f41a8aed7ed1b0673487d1d34e4a94e"
	I1227 09:36:44.152370  627413 cri.go:96] found id: "6f2897ce522ad46f87485549185bcbb88c4bb30c132de49b4d738824df1f3616"
	I1227 09:36:44.152374  627413 cri.go:96] found id: "35bf241ea4f56cd57193378e8b7210de26f0ac1767ec286d23632fe9931444cc"
	I1227 09:36:44.152379  627413 cri.go:96] found id: "784231ee46284fb40820138878c1485b83721a6aee020dd20705f6b5990a1df6"
	I1227 09:36:44.152384  627413 cri.go:96] found id: "c5fc71eb798e5068dea2558184fc2f1324dfde7c0fb1d8eb63ec2e35afe24f87"
	I1227 09:36:44.152389  627413 cri.go:96] found id: "e5693fba043840c8a3d1117c5220be21e1cfd4a801e563c4512c5828ce4adbcd"
	I1227 09:36:44.152393  627413 cri.go:96] found id: "8223e42bf97a022ceca335fd381b6b6c3aeac7c607125c2e1cbf3b803876c7ad"
	I1227 09:36:44.152397  627413 cri.go:96] found id: "9f6ac155f6a42bf5c5966c64219e0b13256e12d474b8bdc4feb2e3846eeca31d"
	I1227 09:36:44.152405  627413 cri.go:96] found id: "e515e63f9478f3764e817ba81f243e7bec7930bfcdb3d016306773fdc664f99a"
	I1227 09:36:44.152410  627413 cri.go:96] found id: "226e6694486c7dbda528967c387d7f6eba3323ceedc06d943992c2aa1a31b965"
	I1227 09:36:44.152421  627413 cri.go:96] found id: ""
	I1227 09:36:44.152473  627413 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 09:36:44.528343  627413 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 09:36:44.540941  627413 pause.go:52] kubelet running: false
	I1227 09:36:44.541017  627413 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1227 09:36:44.679102  627413 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1227 09:36:44.679184  627413 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1227 09:36:44.743411  627413 cri.go:96] found id: "2613f183356984cb4a6f283d937f3370b199b0a7966e716e98cf84ee962e01be"
	I1227 09:36:44.743440  627413 cri.go:96] found id: "4ca19515ea39c3a7dcba93a0c272b27d0f41a8aed7ed1b0673487d1d34e4a94e"
	I1227 09:36:44.743445  627413 cri.go:96] found id: "6f2897ce522ad46f87485549185bcbb88c4bb30c132de49b4d738824df1f3616"
	I1227 09:36:44.743448  627413 cri.go:96] found id: "35bf241ea4f56cd57193378e8b7210de26f0ac1767ec286d23632fe9931444cc"
	I1227 09:36:44.743452  627413 cri.go:96] found id: "784231ee46284fb40820138878c1485b83721a6aee020dd20705f6b5990a1df6"
	I1227 09:36:44.743458  627413 cri.go:96] found id: "c5fc71eb798e5068dea2558184fc2f1324dfde7c0fb1d8eb63ec2e35afe24f87"
	I1227 09:36:44.743463  627413 cri.go:96] found id: "e5693fba043840c8a3d1117c5220be21e1cfd4a801e563c4512c5828ce4adbcd"
	I1227 09:36:44.743467  627413 cri.go:96] found id: "8223e42bf97a022ceca335fd381b6b6c3aeac7c607125c2e1cbf3b803876c7ad"
	I1227 09:36:44.743472  627413 cri.go:96] found id: "9f6ac155f6a42bf5c5966c64219e0b13256e12d474b8bdc4feb2e3846eeca31d"
	I1227 09:36:44.743481  627413 cri.go:96] found id: "e515e63f9478f3764e817ba81f243e7bec7930bfcdb3d016306773fdc664f99a"
	I1227 09:36:44.743485  627413 cri.go:96] found id: "226e6694486c7dbda528967c387d7f6eba3323ceedc06d943992c2aa1a31b965"
	I1227 09:36:44.743490  627413 cri.go:96] found id: ""
	I1227 09:36:44.743529  627413 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 09:36:44.757181  627413 out.go:203] 
	W1227 09:36:44.758379  627413 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T09:36:44Z" level=error msg="open /run/runc: no such file or directory"
	
	W1227 09:36:44.758397  627413 out.go:285] * 
	W1227 09:36:44.760615  627413 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1227 09:36:44.761591  627413 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p old-k8s-version-094398 --alsologtostderr -v=1 failed: exit status 80
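Note: the pause failure above reduces to a single command: `sudo runc list -f json` inside the node exits 1 with "open /run/runc: no such file or directory". A minimal diagnostic sketch (assuming the profile is still running; `minikube ssh` and `crictl` are the stock tools here, and which runtime root CRI-O actually uses on this kicbase image is an open question, not a claim):

	# re-run the exact command the pause path uses, inside the node
	out/minikube-linux-amd64 -p old-k8s-version-094398 ssh -- sudo runc list -f json
	# check whether the expected runc root exists, and what CRI-O itself sees
	out/minikube-linux-amd64 -p old-k8s-version-094398 ssh -- ls -ld /run/runc
	out/minikube-linux-amd64 -p old-k8s-version-094398 ssh -- sudo crictl ps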
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect old-k8s-version-094398
helpers_test.go:244: (dbg) docker inspect old-k8s-version-094398:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "bfa8d511275e6666c6e19cf4732b3f642e18ad5c197ea460e951c265ec0a9509",
	        "Created": "2025-12-27T09:34:24.619442272Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 613548,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-27T09:35:38.646408352Z",
	            "FinishedAt": "2025-12-27T09:35:37.352198603Z"
	        },
	        "Image": "sha256:c444b5e11df9d6bc496256b0e11f4e11deb33a885211ded2c3a4667df55bcb6b",
	        "ResolvConfPath": "/var/lib/docker/containers/bfa8d511275e6666c6e19cf4732b3f642e18ad5c197ea460e951c265ec0a9509/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/bfa8d511275e6666c6e19cf4732b3f642e18ad5c197ea460e951c265ec0a9509/hostname",
	        "HostsPath": "/var/lib/docker/containers/bfa8d511275e6666c6e19cf4732b3f642e18ad5c197ea460e951c265ec0a9509/hosts",
	        "LogPath": "/var/lib/docker/containers/bfa8d511275e6666c6e19cf4732b3f642e18ad5c197ea460e951c265ec0a9509/bfa8d511275e6666c6e19cf4732b3f642e18ad5c197ea460e951c265ec0a9509-json.log",
	        "Name": "/old-k8s-version-094398",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-094398:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-094398",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "bfa8d511275e6666c6e19cf4732b3f642e18ad5c197ea460e951c265ec0a9509",
	                "LowerDir": "/var/lib/docker/overlay2/e5d0aad007a6182b1053636e6612c4db8810b6db3a9158722f140ef4de1ff740-init/diff:/var/lib/docker/overlay2/8c7e7d4b0a6e752d3b033998593f18412fe880cdae3fca36ce4655d5d5dd6a34/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e5d0aad007a6182b1053636e6612c4db8810b6db3a9158722f140ef4de1ff740/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e5d0aad007a6182b1053636e6612c4db8810b6db3a9158722f140ef4de1ff740/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e5d0aad007a6182b1053636e6612c4db8810b6db3a9158722f140ef4de1ff740/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-094398",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-094398/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-094398",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-094398",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-094398",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "8dd9e3ac46221dc7da71bb433435896ac50996510e66f4be65bb45c511ffc16f",
	            "SandboxKey": "/var/run/docker/netns/8dd9e3ac4622",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33443"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33444"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33447"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33445"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33446"
	                    }
	                ]
	            },
	            "Networks": {
	                "old-k8s-version-094398": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "0ba531636d5bcd256e5e0c5cc00963300e9aa97dfe2c1fb4eb178390cd3a90b6",
	                    "EndpointID": "465a187f719232940f3c0decf7840a30e9227b4e721ca2f723bf02d6582378e1",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "ea:35:8a:38:e3:f1",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-094398",
	                        "bfa8d511275e"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
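Note: the harness derives the node's SSH endpoint from the "Ports" map above (22/tcp published on 127.0.0.1:33443). The Go template minikube itself runs for this (visible in the cli_runner lines later in this log) can be replayed by hand; a sketch against this container:

	docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' old-k8s-version-094398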
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-094398 -n old-k8s-version-094398
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-094398 -n old-k8s-version-094398: exit status 2 (310.035753ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
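Note: `--format={{.Host}}` prints only the host container's state, so "Running" on stdout is consistent with a non-zero exit: minikube encodes the state of the other components (kubelet, apiserver, kubeconfig) in the exit code of `status`, which is why the harness treats exit status 2 as possibly benign. The exact code-to-state mapping is version-dependent; to see every component rather than just the host, a plain status call works:

	out/minikube-linux-amd64 status -p old-k8s-version-094398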
helpers_test.go:253: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-094398 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-094398 logs -n 25: (1.017021034s)
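Note: `logs -n 25` caps each log source at roughly its last 25 lines, which is why the dump below starts mid-stream; for a full capture the error box above suggests `minikube logs --file=logs.txt`, i.e. something like:

	out/minikube-linux-amd64 -p old-k8s-version-094398 logs --file=logs.txt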
helpers_test.go:261: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ stop    │ -p test-preload-805186                                                                                                                                                                                                                        │ test-preload-805186          │ jenkins │ v1.37.0 │ 27 Dec 25 09:34 UTC │ 27 Dec 25 09:34 UTC │
	│ start   │ -p test-preload-805186 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio                                                                                                                            │ test-preload-805186          │ jenkins │ v1.37.0 │ 27 Dec 25 09:34 UTC │ 27 Dec 25 09:35 UTC │
	│ start   │ -p cert-expiration-237269 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-237269       │ jenkins │ v1.37.0 │ 27 Dec 25 09:35 UTC │ 27 Dec 25 09:35 UTC │
	│ delete  │ -p cert-expiration-237269                                                                                                                                                                                                                     │ cert-expiration-237269       │ jenkins │ v1.37.0 │ 27 Dec 25 09:35 UTC │                     │
	│ start   │ -p embed-certs-912564 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                        │ embed-certs-912564           │ jenkins │ v1.37.0 │ 27 Dec 25 09:35 UTC │ 27 Dec 25 09:35 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-094398 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-094398       │ jenkins │ v1.37.0 │ 27 Dec 25 09:35 UTC │                     │
	│ stop    │ -p old-k8s-version-094398 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-094398       │ jenkins │ v1.37.0 │ 27 Dec 25 09:35 UTC │ 27 Dec 25 09:35 UTC │
	│ image   │ test-preload-805186 image list                                                                                                                                                                                                                │ test-preload-805186          │ jenkins │ v1.37.0 │ 27 Dec 25 09:35 UTC │ 27 Dec 25 09:35 UTC │
	│ delete  │ -p test-preload-805186                                                                                                                                                                                                                        │ test-preload-805186          │ jenkins │ v1.37.0 │ 27 Dec 25 09:35 UTC │ 27 Dec 25 09:35 UTC │
	│ delete  │ -p disable-driver-mounts-917808                                                                                                                                                                                                               │ disable-driver-mounts-917808 │ jenkins │ v1.37.0 │ 27 Dec 25 09:35 UTC │ 27 Dec 25 09:35 UTC │
	│ start   │ -p no-preload-963457 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                       │ no-preload-963457            │ jenkins │ v1.37.0 │ 27 Dec 25 09:35 UTC │ 27 Dec 25 09:36 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-094398 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-094398       │ jenkins │ v1.37.0 │ 27 Dec 25 09:35 UTC │ 27 Dec 25 09:35 UTC │
	│ start   │ -p old-k8s-version-094398 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-094398       │ jenkins │ v1.37.0 │ 27 Dec 25 09:35 UTC │ 27 Dec 25 09:36 UTC │
	│ delete  │ -p stopped-upgrade-196124                                                                                                                                                                                                                     │ stopped-upgrade-196124       │ jenkins │ v1.37.0 │ 27 Dec 25 09:35 UTC │ 27 Dec 25 09:35 UTC │
	│ start   │ -p default-k8s-diff-port-497722 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-497722 │ jenkins │ v1.37.0 │ 27 Dec 25 09:35 UTC │ 27 Dec 25 09:36 UTC │
	│ addons  │ enable metrics-server -p embed-certs-912564 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-912564           │ jenkins │ v1.37.0 │ 27 Dec 25 09:35 UTC │                     │
	│ stop    │ -p embed-certs-912564 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-912564           │ jenkins │ v1.37.0 │ 27 Dec 25 09:35 UTC │ 27 Dec 25 09:36 UTC │
	│ addons  │ enable dashboard -p embed-certs-912564 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-912564           │ jenkins │ v1.37.0 │ 27 Dec 25 09:36 UTC │ 27 Dec 25 09:36 UTC │
	│ start   │ -p embed-certs-912564 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                        │ embed-certs-912564           │ jenkins │ v1.37.0 │ 27 Dec 25 09:36 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-963457 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-963457            │ jenkins │ v1.37.0 │ 27 Dec 25 09:36 UTC │                     │
	│ stop    │ -p no-preload-963457 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-963457            │ jenkins │ v1.37.0 │ 27 Dec 25 09:36 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-497722 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-497722 │ jenkins │ v1.37.0 │ 27 Dec 25 09:36 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-497722 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-497722 │ jenkins │ v1.37.0 │ 27 Dec 25 09:36 UTC │                     │
	│ image   │ old-k8s-version-094398 image list --format=json                                                                                                                                                                                               │ old-k8s-version-094398       │ jenkins │ v1.37.0 │ 27 Dec 25 09:36 UTC │ 27 Dec 25 09:36 UTC │
	│ pause   │ -p old-k8s-version-094398 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-094398       │ jenkins │ v1.37.0 │ 27 Dec 25 09:36 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/27 09:36:15
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1227 09:36:15.755856  622335 out.go:360] Setting OutFile to fd 1 ...
	I1227 09:36:15.755997  622335 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:36:15.756005  622335 out.go:374] Setting ErrFile to fd 2...
	I1227 09:36:15.756012  622335 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:36:15.756228  622335 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22343-373581/.minikube/bin
	I1227 09:36:15.756685  622335 out.go:368] Setting JSON to false
	I1227 09:36:15.758150  622335 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4720,"bootTime":1766823456,"procs":363,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1227 09:36:15.758213  622335 start.go:143] virtualization: kvm guest
	I1227 09:36:15.759939  622335 out.go:179] * [embed-certs-912564] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1227 09:36:15.761016  622335 out.go:179]   - MINIKUBE_LOCATION=22343
	I1227 09:36:15.761014  622335 notify.go:221] Checking for updates...
	I1227 09:36:15.763382  622335 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 09:36:15.764638  622335 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22343-373581/kubeconfig
	I1227 09:36:15.765807  622335 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22343-373581/.minikube
	I1227 09:36:15.766905  622335 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1227 09:36:15.767909  622335 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1227 09:36:15.769291  622335 config.go:182] Loaded profile config "embed-certs-912564": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:36:15.769895  622335 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 09:36:15.793686  622335 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1227 09:36:15.793853  622335 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 09:36:15.849675  622335 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-27 09:36:15.839729427 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1227 09:36:15.849769  622335 docker.go:319] overlay module found
	I1227 09:36:15.851438  622335 out.go:179] * Using the docker driver based on existing profile
	I1227 09:36:15.852555  622335 start.go:309] selected driver: docker
	I1227 09:36:15.852572  622335 start.go:928] validating driver "docker" against &{Name:embed-certs-912564 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:embed-certs-912564 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 09:36:15.852663  622335 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 09:36:15.853278  622335 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 09:36:15.905518  622335 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-27 09:36:15.896501582 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1227 09:36:15.905807  622335 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 09:36:15.905858  622335 cni.go:84] Creating CNI manager for ""
	I1227 09:36:15.905926  622335 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 09:36:15.905973  622335 start.go:353] cluster config:
	{Name:embed-certs-912564 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:embed-certs-912564 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 09:36:15.907451  622335 out.go:179] * Starting "embed-certs-912564" primary control-plane node in "embed-certs-912564" cluster
	I1227 09:36:15.908326  622335 cache.go:134] Beginning downloading kic base image for docker with crio
	I1227 09:36:15.909241  622335 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
	I1227 09:36:15.910102  622335 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 09:36:15.910131  622335 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22343-373581/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4
	I1227 09:36:15.910156  622335 cache.go:65] Caching tarball of preloaded images
	I1227 09:36:15.910205  622335 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1227 09:36:15.910262  622335 preload.go:251] Found /home/jenkins/minikube-integration/22343-373581/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1227 09:36:15.910273  622335 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I1227 09:36:15.910379  622335 profile.go:143] Saving config to /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/embed-certs-912564/config.json ...
	I1227 09:36:15.929803  622335 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon, skipping pull
	I1227 09:36:15.929822  622335 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in daemon, skipping load
	I1227 09:36:15.929849  622335 cache.go:243] Successfully downloaded all kic artifacts
	I1227 09:36:15.929884  622335 start.go:360] acquireMachinesLock for embed-certs-912564: {Name:mk61b0f1dd44336f66b7ae60f44b102943279f72 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 09:36:15.929937  622335 start.go:364] duration metric: took 35.4µs to acquireMachinesLock for "embed-certs-912564"
	I1227 09:36:15.929953  622335 start.go:96] Skipping create...Using existing machine configuration
	I1227 09:36:15.929958  622335 fix.go:54] fixHost starting: 
	I1227 09:36:15.930186  622335 cli_runner.go:164] Run: docker container inspect embed-certs-912564 --format={{.State.Status}}
	I1227 09:36:15.946167  622335 fix.go:112] recreateIfNeeded on embed-certs-912564: state=Stopped err=<nil>
	W1227 09:36:15.946200  622335 fix.go:138] unexpected machine state, will restart: <nil>
	W1227 09:36:14.163271  616179 node_ready.go:57] node "default-k8s-diff-port-497722" has "Ready":"False" status (will retry)
	W1227 09:36:16.163948  616179 node_ready.go:57] node "default-k8s-diff-port-497722" has "Ready":"False" status (will retry)
	W1227 09:36:13.261954  613189 pod_ready.go:104] pod "coredns-5dd5756b68-l2f7v" is not "Ready", error: <nil>
	W1227 09:36:15.262540  613189 pod_ready.go:104] pod "coredns-5dd5756b68-l2f7v" is not "Ready", error: <nil>
	W1227 09:36:17.761765  613189 pod_ready.go:104] pod "coredns-5dd5756b68-l2f7v" is not "Ready", error: <nil>
	I1227 09:36:17.207000  610436 node_ready.go:49] node "no-preload-963457" is "Ready"
	I1227 09:36:17.207025  610436 node_ready.go:38] duration metric: took 13.502511991s for node "no-preload-963457" to be "Ready" ...
	I1227 09:36:17.207039  610436 api_server.go:52] waiting for apiserver process to appear ...
	I1227 09:36:17.207085  610436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 09:36:17.219077  610436 api_server.go:72] duration metric: took 13.880363312s to wait for apiserver process to appear ...
	I1227 09:36:17.219099  610436 api_server.go:88] waiting for apiserver healthz status ...
	I1227 09:36:17.219117  610436 api_server.go:299] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1227 09:36:17.224033  610436 api_server.go:325] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1227 09:36:17.225019  610436 api_server.go:141] control plane version: v1.35.0
	I1227 09:36:17.225043  610436 api_server.go:131] duration metric: took 5.936968ms to wait for apiserver health ...
	I1227 09:36:17.225053  610436 system_pods.go:43] waiting for kube-system pods to appear ...
	I1227 09:36:17.227917  610436 system_pods.go:59] 8 kube-system pods found
	I1227 09:36:17.227951  610436 system_pods.go:61] "coredns-7d764666f9-wnzhx" [2152780b-b980-4e4a-b652-9cd0ec857a2a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 09:36:17.227958  610436 system_pods.go:61] "etcd-no-preload-963457" [ecf5237d-e6e0-4cee-977a-e2356ec1db8f] Running
	I1227 09:36:17.227966  610436 system_pods.go:61] "kindnet-7kw8b" [36ed3b47-67e2-483b-9563-f3366df2f0c5] Running
	I1227 09:36:17.227970  610436 system_pods.go:61] "kube-apiserver-no-preload-963457" [95e8b881-2ba5-464a-951b-8cfe1d65f33b] Running
	I1227 09:36:17.227978  610436 system_pods.go:61] "kube-controller-manager-no-preload-963457" [2f715235-2f6e-414e-911b-d6ee55e67a06] Running
	I1227 09:36:17.227987  610436 system_pods.go:61] "kube-proxy-grkqs" [6f0a5a48-9159-478a-8949-827103a7c85c] Running
	I1227 09:36:17.227997  610436 system_pods.go:61] "kube-scheduler-no-preload-963457" [f5b9da44-e7d5-4901-b9c6-2315b873d53a] Running
	I1227 09:36:17.228004  610436 system_pods.go:61] "storage-provisioner" [3de40d14-cc88-44f8-a071-caea798ff465] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1227 09:36:17.228015  610436 system_pods.go:74] duration metric: took 2.954672ms to wait for pod list to return data ...
	I1227 09:36:17.228026  610436 default_sa.go:34] waiting for default service account to be created ...
	I1227 09:36:17.230166  610436 default_sa.go:45] found service account: "default"
	I1227 09:36:17.230187  610436 default_sa.go:55] duration metric: took 2.152948ms for default service account to be created ...
	I1227 09:36:17.230195  610436 system_pods.go:116] waiting for k8s-apps to be running ...
	I1227 09:36:17.232590  610436 system_pods.go:86] 8 kube-system pods found
	I1227 09:36:17.232614  610436 system_pods.go:89] "coredns-7d764666f9-wnzhx" [2152780b-b980-4e4a-b652-9cd0ec857a2a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 09:36:17.232621  610436 system_pods.go:89] "etcd-no-preload-963457" [ecf5237d-e6e0-4cee-977a-e2356ec1db8f] Running
	I1227 09:36:17.232626  610436 system_pods.go:89] "kindnet-7kw8b" [36ed3b47-67e2-483b-9563-f3366df2f0c5] Running
	I1227 09:36:17.232629  610436 system_pods.go:89] "kube-apiserver-no-preload-963457" [95e8b881-2ba5-464a-951b-8cfe1d65f33b] Running
	I1227 09:36:17.232633  610436 system_pods.go:89] "kube-controller-manager-no-preload-963457" [2f715235-2f6e-414e-911b-d6ee55e67a06] Running
	I1227 09:36:17.232636  610436 system_pods.go:89] "kube-proxy-grkqs" [6f0a5a48-9159-478a-8949-827103a7c85c] Running
	I1227 09:36:17.232639  610436 system_pods.go:89] "kube-scheduler-no-preload-963457" [f5b9da44-e7d5-4901-b9c6-2315b873d53a] Running
	I1227 09:36:17.232647  610436 system_pods.go:89] "storage-provisioner" [3de40d14-cc88-44f8-a071-caea798ff465] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1227 09:36:17.232678  610436 retry.go:84] will retry after 300ms: missing components: kube-dns
	I1227 09:36:17.541732  610436 system_pods.go:86] 8 kube-system pods found
	I1227 09:36:17.541764  610436 system_pods.go:89] "coredns-7d764666f9-wnzhx" [2152780b-b980-4e4a-b652-9cd0ec857a2a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 09:36:17.541770  610436 system_pods.go:89] "etcd-no-preload-963457" [ecf5237d-e6e0-4cee-977a-e2356ec1db8f] Running
	I1227 09:36:17.541776  610436 system_pods.go:89] "kindnet-7kw8b" [36ed3b47-67e2-483b-9563-f3366df2f0c5] Running
	I1227 09:36:17.541780  610436 system_pods.go:89] "kube-apiserver-no-preload-963457" [95e8b881-2ba5-464a-951b-8cfe1d65f33b] Running
	I1227 09:36:17.541785  610436 system_pods.go:89] "kube-controller-manager-no-preload-963457" [2f715235-2f6e-414e-911b-d6ee55e67a06] Running
	I1227 09:36:17.541815  610436 system_pods.go:89] "kube-proxy-grkqs" [6f0a5a48-9159-478a-8949-827103a7c85c] Running
	I1227 09:36:17.541822  610436 system_pods.go:89] "kube-scheduler-no-preload-963457" [f5b9da44-e7d5-4901-b9c6-2315b873d53a] Running
	I1227 09:36:17.541831  610436 system_pods.go:89] "storage-provisioner" [3de40d14-cc88-44f8-a071-caea798ff465] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1227 09:36:17.912221  610436 system_pods.go:86] 8 kube-system pods found
	I1227 09:36:17.912247  610436 system_pods.go:89] "coredns-7d764666f9-wnzhx" [2152780b-b980-4e4a-b652-9cd0ec857a2a] Running
	I1227 09:36:17.912252  610436 system_pods.go:89] "etcd-no-preload-963457" [ecf5237d-e6e0-4cee-977a-e2356ec1db8f] Running
	I1227 09:36:17.912255  610436 system_pods.go:89] "kindnet-7kw8b" [36ed3b47-67e2-483b-9563-f3366df2f0c5] Running
	I1227 09:36:17.912259  610436 system_pods.go:89] "kube-apiserver-no-preload-963457" [95e8b881-2ba5-464a-951b-8cfe1d65f33b] Running
	I1227 09:36:17.912262  610436 system_pods.go:89] "kube-controller-manager-no-preload-963457" [2f715235-2f6e-414e-911b-d6ee55e67a06] Running
	I1227 09:36:17.912265  610436 system_pods.go:89] "kube-proxy-grkqs" [6f0a5a48-9159-478a-8949-827103a7c85c] Running
	I1227 09:36:17.912269  610436 system_pods.go:89] "kube-scheduler-no-preload-963457" [f5b9da44-e7d5-4901-b9c6-2315b873d53a] Running
	I1227 09:36:17.912272  610436 system_pods.go:89] "storage-provisioner" [3de40d14-cc88-44f8-a071-caea798ff465] Running
	I1227 09:36:17.912279  610436 system_pods.go:126] duration metric: took 682.077772ms to wait for k8s-apps to be running ...
	I1227 09:36:17.912286  610436 system_svc.go:44] waiting for kubelet service to be running ....
	I1227 09:36:17.912328  610436 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 09:36:17.925757  610436 system_svc.go:56] duration metric: took 13.459067ms WaitForService to wait for kubelet
	I1227 09:36:17.925808  610436 kubeadm.go:587] duration metric: took 14.587094691s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 09:36:17.925832  610436 node_conditions.go:102] verifying NodePressure condition ...
	I1227 09:36:17.928354  610436 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1227 09:36:17.928377  610436 node_conditions.go:123] node cpu capacity is 8
	I1227 09:36:17.928390  610436 node_conditions.go:105] duration metric: took 2.552518ms to run NodePressure ...
	I1227 09:36:17.928402  610436 start.go:242] waiting for startup goroutines ...
	I1227 09:36:17.928411  610436 start.go:247] waiting for cluster config update ...
	I1227 09:36:17.928428  610436 start.go:256] writing updated cluster config ...
	I1227 09:36:17.928688  610436 ssh_runner.go:195] Run: rm -f paused
	I1227 09:36:17.932505  610436 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1227 09:36:18.012128  610436 pod_ready.go:83] waiting for pod "coredns-7d764666f9-wnzhx" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:36:18.016092  610436 pod_ready.go:94] pod "coredns-7d764666f9-wnzhx" is "Ready"
	I1227 09:36:18.016113  610436 pod_ready.go:86] duration metric: took 3.954033ms for pod "coredns-7d764666f9-wnzhx" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:36:18.018039  610436 pod_ready.go:83] waiting for pod "etcd-no-preload-963457" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:36:18.021308  610436 pod_ready.go:94] pod "etcd-no-preload-963457" is "Ready"
	I1227 09:36:18.021328  610436 pod_ready.go:86] duration metric: took 3.271462ms for pod "etcd-no-preload-963457" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:36:18.022843  610436 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-963457" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:36:18.025875  610436 pod_ready.go:94] pod "kube-apiserver-no-preload-963457" is "Ready"
	I1227 09:36:18.025892  610436 pod_ready.go:86] duration metric: took 3.027767ms for pod "kube-apiserver-no-preload-963457" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:36:18.027544  610436 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-963457" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:36:18.336856  610436 pod_ready.go:94] pod "kube-controller-manager-no-preload-963457" is "Ready"
	I1227 09:36:18.336887  610436 pod_ready.go:86] duration metric: took 309.32474ms for pod "kube-controller-manager-no-preload-963457" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:36:18.537212  610436 pod_ready.go:83] waiting for pod "kube-proxy-grkqs" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:36:18.936288  610436 pod_ready.go:94] pod "kube-proxy-grkqs" is "Ready"
	I1227 09:36:18.936315  610436 pod_ready.go:86] duration metric: took 399.078348ms for pod "kube-proxy-grkqs" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:36:19.137254  610436 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-963457" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:36:19.536512  610436 pod_ready.go:94] pod "kube-scheduler-no-preload-963457" is "Ready"
	I1227 09:36:19.536545  610436 pod_ready.go:86] duration metric: took 399.259363ms for pod "kube-scheduler-no-preload-963457" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:36:19.536571  610436 pod_ready.go:40] duration metric: took 1.604026487s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1227 09:36:19.582579  610436 start.go:625] kubectl: 1.35.0, cluster: 1.35.0 (minor skew: 0)
	I1227 09:36:19.584481  610436 out.go:179] * Done! kubectl is now configured to use "no-preload-963457" cluster and "default" namespace by default
	I1227 09:36:15.947788  622335 out.go:252] * Restarting existing docker container for "embed-certs-912564" ...
	I1227 09:36:15.947868  622335 cli_runner.go:164] Run: docker start embed-certs-912564
	I1227 09:36:16.186477  622335 cli_runner.go:164] Run: docker container inspect embed-certs-912564 --format={{.State.Status}}
	I1227 09:36:16.204808  622335 kic.go:430] container "embed-certs-912564" state is running.
	I1227 09:36:16.205231  622335 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-912564
	I1227 09:36:16.224487  622335 profile.go:143] Saving config to /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/embed-certs-912564/config.json ...
	I1227 09:36:16.224742  622335 machine.go:94] provisionDockerMachine start ...
	I1227 09:36:16.224849  622335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-912564
	I1227 09:36:16.243201  622335 main.go:144] libmachine: Using SSH client type: native
	I1227 09:36:16.243427  622335 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33453 <nil> <nil>}
	I1227 09:36:16.243440  622335 main.go:144] libmachine: About to run SSH command:
	hostname
	I1227 09:36:16.244129  622335 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:57058->127.0.0.1:33453: read: connection reset by peer
	I1227 09:36:19.367696  622335 main.go:144] libmachine: SSH cmd err, output: <nil>: embed-certs-912564
	
	I1227 09:36:19.367723  622335 ubuntu.go:182] provisioning hostname "embed-certs-912564"
	I1227 09:36:19.367814  622335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-912564
	I1227 09:36:19.386757  622335 main.go:144] libmachine: Using SSH client type: native
	I1227 09:36:19.387127  622335 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33453 <nil> <nil>}
	I1227 09:36:19.387150  622335 main.go:144] libmachine: About to run SSH command:
	sudo hostname embed-certs-912564 && echo "embed-certs-912564" | sudo tee /etc/hostname
	I1227 09:36:19.522771  622335 main.go:144] libmachine: SSH cmd err, output: <nil>: embed-certs-912564
	
	I1227 09:36:19.522877  622335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-912564
	I1227 09:36:19.543038  622335 main.go:144] libmachine: Using SSH client type: native
	I1227 09:36:19.543358  622335 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33453 <nil> <nil>}
	I1227 09:36:19.543388  622335 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-912564' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-912564/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-912564' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1227 09:36:19.668353  622335 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1227 09:36:19.668380  622335 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22343-373581/.minikube CaCertPath:/home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22343-373581/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22343-373581/.minikube}
	I1227 09:36:19.668427  622335 ubuntu.go:190] setting up certificates
	I1227 09:36:19.668447  622335 provision.go:84] configureAuth start
	I1227 09:36:19.668529  622335 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-912564
	I1227 09:36:19.689166  622335 provision.go:143] copyHostCerts
	I1227 09:36:19.689233  622335 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-373581/.minikube/key.pem, removing ...
	I1227 09:36:19.689256  622335 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-373581/.minikube/key.pem
	I1227 09:36:19.689339  622335 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22343-373581/.minikube/key.pem (1679 bytes)
	I1227 09:36:19.689483  622335 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-373581/.minikube/ca.pem, removing ...
	I1227 09:36:19.689499  622335 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-373581/.minikube/ca.pem
	I1227 09:36:19.689545  622335 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22343-373581/.minikube/ca.pem (1082 bytes)
	I1227 09:36:19.689664  622335 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-373581/.minikube/cert.pem, removing ...
	I1227 09:36:19.689673  622335 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-373581/.minikube/cert.pem
	I1227 09:36:19.689711  622335 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22343-373581/.minikube/cert.pem (1123 bytes)
	I1227 09:36:19.689881  622335 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22343-373581/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca-key.pem org=jenkins.embed-certs-912564 san=[127.0.0.1 192.168.94.2 embed-certs-912564 localhost minikube]
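Here provision.go signs a per-machine server certificate against the minikube CA, with the SAN list shown in the line above. As a rough illustration of that signing step, a minimal crypto/x509 sketch (not minikube's actual provision code; caCert/caKey are assumed to be the already-parsed CA):

```go
package provision

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"math/big"
	"net"
	"time"
)

// signServerCert issues a server certificate for the SANs logged above.
// caCert and caKey are assumed to be the already-loaded minikube CA.
func signServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-912564"}},
		// Mirrors san=[127.0.0.1 192.168.94.2 embed-certs-912564 localhost minikube].
		DNSNames:    []string{"embed-certs-912564", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.94.2")},
		NotBefore:   time.Now().Add(-time.Hour),
		NotAfter:    time.Now().Add(26280 * time.Hour), // matches CertExpiration in the profile
		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	if err != nil {
		return nil, nil, err
	}
	return der, key, nil
}
```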
	I1227 09:36:19.746663  622335 provision.go:177] copyRemoteCerts
	I1227 09:36:19.746730  622335 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1227 09:36:19.746782  622335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-912564
	I1227 09:36:19.766272  622335 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/embed-certs-912564/id_rsa Username:docker}
	I1227 09:36:19.858141  622335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1227 09:36:19.876224  622335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1227 09:36:19.894481  622335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1227 09:36:19.911686  622335 provision.go:87] duration metric: took 243.216642ms to configureAuth
	I1227 09:36:19.911711  622335 ubuntu.go:206] setting minikube options for container-runtime
	I1227 09:36:19.911915  622335 config.go:182] Loaded profile config "embed-certs-912564": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:36:19.912029  622335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-912564
	I1227 09:36:19.930663  622335 main.go:144] libmachine: Using SSH client type: native
	I1227 09:36:19.930962  622335 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33453 <nil> <nil>}
	I1227 09:36:19.930983  622335 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1227 09:36:20.251003  622335 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1227 09:36:20.251031  622335 machine.go:97] duration metric: took 4.026272116s to provisionDockerMachine
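Each of these provisioning steps runs over SSH to the container's forwarded port 22 (33453 above). A minimal sketch of executing one such remote command with golang.org/x/crypto/ssh, reusing the key path and port from the sshutil.go lines (illustrative only; the probe command is arbitrary):

```go
package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Key path and forwarded port come from the sshutil.go log lines above.
	keyPEM, err := os.ReadFile("/home/jenkins/minikube-integration/22343-373581/.minikube/machines/embed-certs-912564/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(keyPEM)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // the KIC node's host key is ephemeral
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:33453", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer sess.Close()
	out, err := sess.CombinedOutput("sudo systemctl is-active crio")
	fmt.Printf("err=%v output=%s\n", err, out)
}
```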
	I1227 09:36:20.251046  622335 start.go:293] postStartSetup for "embed-certs-912564" (driver="docker")
	I1227 09:36:20.251060  622335 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1227 09:36:20.251125  622335 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1227 09:36:20.251200  622335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-912564
	I1227 09:36:20.272340  622335 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/embed-certs-912564/id_rsa Username:docker}
	I1227 09:36:20.363700  622335 ssh_runner.go:195] Run: cat /etc/os-release
	I1227 09:36:20.367711  622335 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1227 09:36:20.367734  622335 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1227 09:36:20.367749  622335 filesync.go:126] Scanning /home/jenkins/minikube-integration/22343-373581/.minikube/addons for local assets ...
	I1227 09:36:20.367820  622335 filesync.go:126] Scanning /home/jenkins/minikube-integration/22343-373581/.minikube/files for local assets ...
	I1227 09:36:20.367922  622335 filesync.go:149] local asset: /home/jenkins/minikube-integration/22343-373581/.minikube/files/etc/ssl/certs/3771712.pem -> 3771712.pem in /etc/ssl/certs
	I1227 09:36:20.368051  622335 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1227 09:36:20.376361  622335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/files/etc/ssl/certs/3771712.pem --> /etc/ssl/certs/3771712.pem (1708 bytes)
	I1227 09:36:20.393895  622335 start.go:296] duration metric: took 142.830385ms for postStartSetup
	I1227 09:36:20.393981  622335 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 09:36:20.394046  622335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-912564
	I1227 09:36:20.412636  622335 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/embed-certs-912564/id_rsa Username:docker}
	I1227 09:36:20.501303  622335 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1227 09:36:20.506127  622335 fix.go:56] duration metric: took 4.576160597s for fixHost
	I1227 09:36:20.506154  622335 start.go:83] releasing machines lock for "embed-certs-912564", held for 4.576205681s
	I1227 09:36:20.506231  622335 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-912564
	I1227 09:36:20.526289  622335 ssh_runner.go:195] Run: cat /version.json
	I1227 09:36:20.526337  622335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-912564
	I1227 09:36:20.526345  622335 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1227 09:36:20.526445  622335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-912564
	I1227 09:36:20.546473  622335 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/embed-certs-912564/id_rsa Username:docker}
	I1227 09:36:20.546990  622335 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/embed-certs-912564/id_rsa Username:docker}
	I1227 09:36:20.635010  622335 ssh_runner.go:195] Run: systemctl --version
	I1227 09:36:20.692254  622335 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1227 09:36:20.729042  622335 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1227 09:36:20.734159  622335 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1227 09:36:20.734289  622335 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1227 09:36:20.742588  622335 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1227 09:36:20.742612  622335 start.go:496] detecting cgroup driver to use...
	I1227 09:36:20.742656  622335 detect.go:190] detected "systemd" cgroup driver on host os
	I1227 09:36:20.742708  622335 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1227 09:36:20.757772  622335 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 09:36:20.771033  622335 docker.go:218] disabling cri-docker service (if available) ...
	I1227 09:36:20.771095  622335 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1227 09:36:20.785978  622335 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1227 09:36:20.799169  622335 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1227 09:36:20.882315  622335 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1227 09:36:20.965183  622335 docker.go:234] disabling docker service ...
	I1227 09:36:20.965254  622335 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1227 09:36:20.980266  622335 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1227 09:36:20.992591  622335 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1227 09:36:21.074160  622335 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1227 09:36:21.160689  622335 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1227 09:36:21.174204  622335 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 09:36:21.188429  622335 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1227 09:36:21.188490  622335 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:36:21.197653  622335 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1227 09:36:21.197706  622335 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:36:21.206508  622335 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:36:21.215288  622335 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:36:21.224635  622335 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1227 09:36:21.232876  622335 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:36:21.241632  622335 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:36:21.250258  622335 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:36:21.259256  622335 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1227 09:36:21.267330  622335 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1227 09:36:21.274844  622335 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 09:36:21.357225  622335 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1227 09:36:21.513416  622335 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1227 09:36:21.513491  622335 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1227 09:36:21.517802  622335 start.go:574] Will wait 60s for crictl version
	I1227 09:36:21.517863  622335 ssh_runner.go:195] Run: which crictl
	I1227 09:36:21.521539  622335 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1227 09:36:21.547358  622335 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1227 09:36:21.547444  622335 ssh_runner.go:195] Run: crio --version
	I1227 09:36:21.578207  622335 ssh_runner.go:195] Run: crio --version
	I1227 09:36:21.609292  622335 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.34.3 ...
	W1227 09:36:18.663032  616179 node_ready.go:57] node "default-k8s-diff-port-497722" has "Ready":"False" status (will retry)
	W1227 09:36:20.664243  616179 node_ready.go:57] node "default-k8s-diff-port-497722" has "Ready":"False" status (will retry)
	I1227 09:36:21.610434  622335 cli_runner.go:164] Run: docker network inspect embed-certs-912564 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 09:36:21.628243  622335 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1227 09:36:21.632413  622335 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 09:36:21.642892  622335 kubeadm.go:884] updating cluster {Name:embed-certs-912564 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:embed-certs-912564 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1227 09:36:21.643006  622335 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 09:36:21.643062  622335 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 09:36:21.677448  622335 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 09:36:21.677471  622335 crio.go:433] Images already preloaded, skipping extraction
	I1227 09:36:21.677524  622335 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 09:36:21.703610  622335 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 09:36:21.703636  622335 cache_images.go:86] Images are preloaded, skipping loading
	I1227 09:36:21.703645  622335 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.35.0 crio true true} ...
	I1227 09:36:21.703772  622335 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-912564 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:embed-certs-912564 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1227 09:36:21.703895  622335 ssh_runner.go:195] Run: crio config
	I1227 09:36:21.750305  622335 cni.go:84] Creating CNI manager for ""
	I1227 09:36:21.750333  622335 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 09:36:21.750350  622335 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1227 09:36:21.750373  622335 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-912564 NodeName:embed-certs-912564 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1227 09:36:21.750509  622335 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-912564"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1227 09:36:21.750578  622335 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1227 09:36:21.759704  622335 binaries.go:51] Found k8s binaries, skipping transfer
	I1227 09:36:21.759777  622335 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1227 09:36:21.768072  622335 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1227 09:36:21.781002  622335 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1227 09:36:21.793925  622335 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
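The kubeadm config dumped above is rendered from Go templates before being shipped as kubeadm.yaml.new in the scp line here. A minimal text/template sketch of that rendering step, using a deliberately trimmed, hypothetical template rather than minikube's real one:

```go
package main

import (
	"os"
	"text/template"
)

// tmpl is a hypothetical, cut-down stand-in for minikube's kubeadm template.
const tmpl = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
`

func main() {
	// Values mirror the kubeadm options logged above.
	data := struct {
		AdvertiseAddress string
		APIServerPort    int
		CRISocket        string
		NodeName         string
	}{"192.168.94.2", 8443, "unix:///var/run/crio/crio.sock", "embed-certs-912564"}
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	if err := t.Execute(os.Stdout, data); err != nil {
		panic(err)
	}
}
```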
	I1227 09:36:21.806305  622335 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1227 09:36:21.809898  622335 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 09:36:21.820032  622335 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 09:36:21.920758  622335 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 09:36:21.960171  622335 certs.go:69] Setting up /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/embed-certs-912564 for IP: 192.168.94.2
	I1227 09:36:21.960196  622335 certs.go:195] generating shared ca certs ...
	I1227 09:36:21.960231  622335 certs.go:227] acquiring lock for ca certs: {Name:mkab52f7e7387ca5a3a8b54d27e85c3698ea95d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:36:21.960474  622335 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22343-373581/.minikube/ca.key
	I1227 09:36:21.960554  622335 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22343-373581/.minikube/proxy-client-ca.key
	I1227 09:36:21.960569  622335 certs.go:257] generating profile certs ...
	I1227 09:36:21.960701  622335 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/embed-certs-912564/client.key
	I1227 09:36:21.960779  622335 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/embed-certs-912564/apiserver.key.6601433b
	I1227 09:36:21.960888  622335 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/embed-certs-912564/proxy-client.key
	I1227 09:36:21.961033  622335 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/377171.pem (1338 bytes)
	W1227 09:36:21.961086  622335 certs.go:480] ignoring /home/jenkins/minikube-integration/22343-373581/.minikube/certs/377171_empty.pem, impossibly tiny 0 bytes
	I1227 09:36:21.961113  622335 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca-key.pem (1679 bytes)
	I1227 09:36:21.961150  622335 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca.pem (1082 bytes)
	I1227 09:36:21.961186  622335 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/cert.pem (1123 bytes)
	I1227 09:36:21.961225  622335 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/key.pem (1679 bytes)
	I1227 09:36:21.961298  622335 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-373581/.minikube/files/etc/ssl/certs/3771712.pem (1708 bytes)
	I1227 09:36:21.962178  622335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1227 09:36:21.985651  622335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1227 09:36:22.006677  622335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1227 09:36:22.029280  622335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1227 09:36:22.054424  622335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/embed-certs-912564/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1227 09:36:22.077264  622335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/embed-certs-912564/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1227 09:36:22.095602  622335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/embed-certs-912564/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1227 09:36:22.113971  622335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/embed-certs-912564/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1227 09:36:22.131748  622335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/certs/377171.pem --> /usr/share/ca-certificates/377171.pem (1338 bytes)
	I1227 09:36:22.149344  622335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/files/etc/ssl/certs/3771712.pem --> /usr/share/ca-certificates/3771712.pem (1708 bytes)
	I1227 09:36:22.167734  622335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1227 09:36:22.187888  622335 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1227 09:36:22.201027  622335 ssh_runner.go:195] Run: openssl version
	I1227 09:36:22.207221  622335 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:36:22.214467  622335 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1227 09:36:22.221999  622335 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:36:22.226212  622335 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 09:06 /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:36:22.226259  622335 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:36:22.269710  622335 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1227 09:36:22.277804  622335 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/377171.pem
	I1227 09:36:22.285678  622335 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/377171.pem /etc/ssl/certs/377171.pem
	I1227 09:36:22.293081  622335 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/377171.pem
	I1227 09:36:22.297054  622335 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 27 09:09 /usr/share/ca-certificates/377171.pem
	I1227 09:36:22.297104  622335 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/377171.pem
	I1227 09:36:22.331452  622335 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1227 09:36:22.339171  622335 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3771712.pem
	I1227 09:36:22.347116  622335 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3771712.pem /etc/ssl/certs/3771712.pem
	I1227 09:36:22.354513  622335 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3771712.pem
	I1227 09:36:22.358217  622335 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 27 09:09 /usr/share/ca-certificates/3771712.pem
	I1227 09:36:22.358268  622335 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3771712.pem
	I1227 09:36:22.394066  622335 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
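The test -L probes above verify the OpenSSL-style trust links: each certificate placed in /usr/share/ca-certificates gets a symlink in /etc/ssl/certs named after its subject hash (b5213941.0, 51391683.0, 3ec20f2e.0 here). A minimal sketch of creating one such link by shelling out to openssl, mirroring the ln -fs / x509 -hash pair in the log (hypothetical helper, not minikube's code):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash recreates the symlink step from the log:
// /etc/ssl/certs/<openssl x509 -hash>.0 -> certPath.
func linkBySubjectHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // -f semantics: replace any existing link
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```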
	I1227 09:36:22.402772  622335 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1227 09:36:22.406779  622335 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1227 09:36:22.443933  622335 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1227 09:36:22.482195  622335 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1227 09:36:22.529477  622335 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1227 09:36:22.575213  622335 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1227 09:36:22.634783  622335 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1227 09:36:22.671980  622335 kubeadm.go:401] StartCluster: {Name:embed-certs-912564 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:embed-certs-912564 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 09:36:22.672057  622335 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 09:36:22.672116  622335 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 09:36:22.705162  622335 cri.go:96] found id: "5383d4cdce95af97f9b9e8e07db61c856f19c8db586c179d8ff736a43046829e"
	I1227 09:36:22.705186  622335 cri.go:96] found id: "4073c03ac98fe56856e504a1aa0d5a1748d26e6ce500dc31ad8e91ee49384cd6"
	I1227 09:36:22.705192  622335 cri.go:96] found id: "ba83fd494a8c5a1bc7eb22555934e2b74494963aa284a3786fa73f76c60a9175"
	I1227 09:36:22.705196  622335 cri.go:96] found id: "663c76b88f42532f7c763b6916bdc80252b590b27aa690c8fe09d547aca1eb6c"
	I1227 09:36:22.705205  622335 cri.go:96] found id: ""
	I1227 09:36:22.705250  622335 ssh_runner.go:195] Run: sudo runc list -f json
	W1227 09:36:22.718695  622335 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T09:36:22Z" level=error msg="open /run/runc: no such file or directory"
	I1227 09:36:22.718785  622335 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1227 09:36:22.726975  622335 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1227 09:36:22.726995  622335 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1227 09:36:22.727046  622335 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1227 09:36:22.736032  622335 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1227 09:36:22.737138  622335 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-912564" does not appear in /home/jenkins/minikube-integration/22343-373581/kubeconfig
	I1227 09:36:22.737771  622335 kubeconfig.go:62] /home/jenkins/minikube-integration/22343-373581/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-912564" cluster setting kubeconfig missing "embed-certs-912564" context setting]
	I1227 09:36:22.738693  622335 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-373581/kubeconfig: {Name:mkb411689a0952d3f8325ae3ededdb7352d2980c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:36:22.740844  622335 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1227 09:36:22.750818  622335 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.94.2
	I1227 09:36:22.750853  622335 kubeadm.go:602] duration metric: took 23.85154ms to restartPrimaryControlPlane
	I1227 09:36:22.750864  622335 kubeadm.go:403] duration metric: took 78.893214ms to StartCluster
	I1227 09:36:22.750883  622335 settings.go:142] acquiring lock: {Name:mkc867fd794159ff1847b43b60548e454c403aa9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:36:22.750952  622335 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22343-373581/kubeconfig
	I1227 09:36:22.753086  622335 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-373581/kubeconfig: {Name:mkb411689a0952d3f8325ae3ededdb7352d2980c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:36:22.753360  622335 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1227 09:36:22.753437  622335 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1227 09:36:22.753532  622335 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-912564"
	I1227 09:36:22.753555  622335 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-912564"
	I1227 09:36:22.753556  622335 addons.go:70] Setting dashboard=true in profile "embed-certs-912564"
	W1227 09:36:22.753563  622335 addons.go:248] addon storage-provisioner should already be in state true
	I1227 09:36:22.753573  622335 addons.go:239] Setting addon dashboard=true in "embed-certs-912564"
	W1227 09:36:22.753581  622335 addons.go:248] addon dashboard should already be in state true
	I1227 09:36:22.753576  622335 addons.go:70] Setting default-storageclass=true in profile "embed-certs-912564"
	I1227 09:36:22.753593  622335 host.go:66] Checking if "embed-certs-912564" exists ...
	I1227 09:36:22.753606  622335 host.go:66] Checking if "embed-certs-912564" exists ...
	I1227 09:36:22.753608  622335 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-912564"
	I1227 09:36:22.753609  622335 config.go:182] Loaded profile config "embed-certs-912564": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:36:22.753938  622335 cli_runner.go:164] Run: docker container inspect embed-certs-912564 --format={{.State.Status}}
	I1227 09:36:22.754139  622335 cli_runner.go:164] Run: docker container inspect embed-certs-912564 --format={{.State.Status}}
	I1227 09:36:22.754187  622335 cli_runner.go:164] Run: docker container inspect embed-certs-912564 --format={{.State.Status}}
	I1227 09:36:22.779956  622335 out.go:179] * Verifying Kubernetes components...
	I1227 09:36:22.780837  622335 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1227 09:36:22.780872  622335 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1227 09:36:22.781059  622335 addons.go:239] Setting addon default-storageclass=true in "embed-certs-912564"
	W1227 09:36:22.781224  622335 addons.go:248] addon default-storageclass should already be in state true
	I1227 09:36:22.781269  622335 host.go:66] Checking if "embed-certs-912564" exists ...
	I1227 09:36:22.781577  622335 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 09:36:22.781774  622335 cli_runner.go:164] Run: docker container inspect embed-certs-912564 --format={{.State.Status}}
	I1227 09:36:22.784055  622335 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1227 09:36:22.784074  622335 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1227 09:36:22.784123  622335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-912564
	I1227 09:36:22.785174  622335 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	W1227 09:36:19.763616  613189 pod_ready.go:104] pod "coredns-5dd5756b68-l2f7v" is not "Ready", error: <nil>
	W1227 09:36:22.263259  613189 pod_ready.go:104] pod "coredns-5dd5756b68-l2f7v" is not "Ready", error: <nil>
	I1227 09:36:22.786204  622335 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1227 09:36:22.786220  622335 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1227 09:36:22.786279  622335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-912564
	I1227 09:36:22.807331  622335 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1227 09:36:22.807359  622335 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1227 09:36:22.807427  622335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-912564
	I1227 09:36:22.807672  622335 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/embed-certs-912564/id_rsa Username:docker}
	I1227 09:36:22.811339  622335 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/embed-certs-912564/id_rsa Username:docker}
	I1227 09:36:22.843561  622335 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/embed-certs-912564/id_rsa Username:docker}
	I1227 09:36:22.926282  622335 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 09:36:22.928069  622335 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1227 09:36:22.937857  622335 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1227 09:36:22.937883  622335 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1227 09:36:22.940823  622335 node_ready.go:35] waiting up to 6m0s for node "embed-certs-912564" to be "Ready" ...
	I1227 09:36:22.952448  622335 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1227 09:36:22.952469  622335 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1227 09:36:22.954110  622335 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1227 09:36:22.967202  622335 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1227 09:36:22.967226  622335 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1227 09:36:22.981928  622335 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1227 09:36:22.981953  622335 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1227 09:36:22.998766  622335 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1227 09:36:22.998803  622335 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1227 09:36:23.012500  622335 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1227 09:36:23.012534  622335 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1227 09:36:23.025289  622335 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1227 09:36:23.025315  622335 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1227 09:36:23.037841  622335 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1227 09:36:23.037868  622335 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1227 09:36:23.050151  622335 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1227 09:36:23.050172  622335 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1227 09:36:23.063127  622335 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1227 09:36:24.188905  622335 node_ready.go:49] node "embed-certs-912564" is "Ready"
	I1227 09:36:24.188945  622335 node_ready.go:38] duration metric: took 1.248089417s for node "embed-certs-912564" to be "Ready" ...
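node_ready.go polls the node object until its Ready condition reports True. A minimal client-go sketch of a single such check, assuming access to the kubeconfig written earlier (hypothetical helper, not minikube's wait loop):

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the named node's Ready condition is True.
func nodeReady(clientset *kubernetes.Clientset, name string) (bool, error) {
	node, err := clientset.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/22343-373581/kubeconfig")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ready, err := nodeReady(clientset, "embed-certs-912564")
	fmt.Println(ready, err)
}
```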
	I1227 09:36:24.188966  622335 api_server.go:52] waiting for apiserver process to appear ...
	I1227 09:36:24.189025  622335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 09:36:24.706854  622335 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.778752913s)
	I1227 09:36:24.706946  622335 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.75280549s)
	I1227 09:36:24.707028  622335 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.643865472s)
	I1227 09:36:24.707081  622335 api_server.go:72] duration metric: took 1.953688648s to wait for apiserver process to appear ...
	I1227 09:36:24.707107  622335 api_server.go:88] waiting for apiserver healthz status ...
	I1227 09:36:24.707132  622335 api_server.go:299] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1227 09:36:24.708777  622335 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-912564 addons enable metrics-server
	
	I1227 09:36:24.713189  622335 api_server.go:325] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1227 09:36:24.713214  622335 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1227 09:36:24.718380  622335 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1227 09:36:24.719363  622335 addons.go:530] duration metric: took 1.965938957s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1227 09:36:25.207981  622335 api_server.go:299] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1227 09:36:25.212198  622335 api_server.go:325] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1227 09:36:25.212222  622335 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1227 09:36:25.707923  622335 api_server.go:299] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1227 09:36:25.712098  622335 api_server.go:325] https://192.168.94.2:8443/healthz returned 200:
	ok
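The poll above repeats until /healthz stops returning 500, which happens once the rbac/bootstrap-roles post-start hook finishes. A minimal sketch of such a poll, skipping TLS verification since this sketch does not load minikubeCA (illustrative, not minikube's api_server.go implementation):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			// The apiserver cert is signed by minikubeCA, which this
			// sketch does not load; skip verification for brevity.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(60 * time.Second)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.94.2:8443/healthz")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy")
				return
			}
			fmt.Println("healthz returned", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms gap between polls above
	}
	fmt.Println("timed out waiting for healthz")
}
```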
	I1227 09:36:25.712980  622335 api_server.go:141] control plane version: v1.35.0
	I1227 09:36:25.713005  622335 api_server.go:131] duration metric: took 1.005888464s to wait for apiserver health ...
	I1227 09:36:25.713013  622335 system_pods.go:43] waiting for kube-system pods to appear ...
	I1227 09:36:25.716444  622335 system_pods.go:59] 8 kube-system pods found
	I1227 09:36:25.716489  622335 system_pods.go:61] "coredns-7d764666f9-vm5hp" [e07c8612-a077-44b5-b84f-6dda3bc90a64] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 09:36:25.716503  622335 system_pods.go:61] "etcd-embed-certs-912564" [05ab5aa9-c66f-449d-bb47-6c48d44d1db7] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1227 09:36:25.716510  622335 system_pods.go:61] "kindnet-bznfn" [73083928-8435-4e2e-913b-ff93fa424106] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1227 09:36:25.716517  622335 system_pods.go:61] "kube-apiserver-embed-certs-912564" [16628f2b-e9fa-4772-b8cb-8ef74d603b7f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1227 09:36:25.716522  622335 system_pods.go:61] "kube-controller-manager-embed-certs-912564" [78314a52-dd39-4d37-9d70-8002b392e928] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1227 09:36:25.716532  622335 system_pods.go:61] "kube-proxy-dv8ch" [2a923e9f-87c7-472f-b5b9-506bcdc67cb3] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1227 09:36:25.716537  622335 system_pods.go:61] "kube-scheduler-embed-certs-912564" [81aa6d57-9095-411f-b4a0-653e59fccb72] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1227 09:36:25.716542  622335 system_pods.go:61] "storage-provisioner" [af70aaa7-5435-48e3-8275-f12100402980] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1227 09:36:25.716548  622335 system_pods.go:74] duration metric: took 3.528996ms to wait for pod list to return data ...
	I1227 09:36:25.716555  622335 default_sa.go:34] waiting for default service account to be created ...
	I1227 09:36:25.719151  622335 default_sa.go:45] found service account: "default"
	I1227 09:36:25.719169  622335 default_sa.go:55] duration metric: took 2.608678ms for default service account to be created ...
	I1227 09:36:25.719176  622335 system_pods.go:116] waiting for k8s-apps to be running ...
	I1227 09:36:25.721738  622335 system_pods.go:86] 8 kube-system pods found
	I1227 09:36:25.721763  622335 system_pods.go:89] "coredns-7d764666f9-vm5hp" [e07c8612-a077-44b5-b84f-6dda3bc90a64] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 09:36:25.721771  622335 system_pods.go:89] "etcd-embed-certs-912564" [05ab5aa9-c66f-449d-bb47-6c48d44d1db7] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1227 09:36:25.721778  622335 system_pods.go:89] "kindnet-bznfn" [73083928-8435-4e2e-913b-ff93fa424106] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1227 09:36:25.721786  622335 system_pods.go:89] "kube-apiserver-embed-certs-912564" [16628f2b-e9fa-4772-b8cb-8ef74d603b7f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1227 09:36:25.721805  622335 system_pods.go:89] "kube-controller-manager-embed-certs-912564" [78314a52-dd39-4d37-9d70-8002b392e928] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1227 09:36:25.721816  622335 system_pods.go:89] "kube-proxy-dv8ch" [2a923e9f-87c7-472f-b5b9-506bcdc67cb3] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1227 09:36:25.721831  622335 system_pods.go:89] "kube-scheduler-embed-certs-912564" [81aa6d57-9095-411f-b4a0-653e59fccb72] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1227 09:36:25.721839  622335 system_pods.go:89] "storage-provisioner" [af70aaa7-5435-48e3-8275-f12100402980] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1227 09:36:25.721850  622335 system_pods.go:126] duration metric: took 2.668061ms to wait for k8s-apps to be running ...
	I1227 09:36:25.721859  622335 system_svc.go:44] waiting for kubelet service to be running ....
	I1227 09:36:25.721906  622335 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 09:36:25.735465  622335 system_svc.go:56] duration metric: took 13.598326ms WaitForService to wait for kubelet
	I1227 09:36:25.735491  622335 kubeadm.go:587] duration metric: took 2.98210021s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 09:36:25.735514  622335 node_conditions.go:102] verifying NodePressure condition ...
	I1227 09:36:25.737852  622335 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1227 09:36:25.737873  622335 node_conditions.go:123] node cpu capacity is 8
	I1227 09:36:25.737888  622335 node_conditions.go:105] duration metric: took 2.365444ms to run NodePressure ...
	I1227 09:36:25.737899  622335 start.go:242] waiting for startup goroutines ...
	I1227 09:36:25.737908  622335 start.go:247] waiting for cluster config update ...
	I1227 09:36:25.737919  622335 start.go:256] writing updated cluster config ...
	I1227 09:36:25.738145  622335 ssh_runner.go:195] Run: rm -f paused
	I1227 09:36:25.741968  622335 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1227 09:36:25.745155  622335 pod_ready.go:83] waiting for pod "coredns-7d764666f9-vm5hp" in "kube-system" namespace to be "Ready" or be gone ...
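The healthz probe in the lines above is plain HTTPS polling: the test hits /healthz until the apiserver answers 200 with body "ok". A minimal standalone sketch of the same probe, assuming the endpoint from this log (192.168.94.2:8443) and skipping TLS verification because the test cluster's apiserver certificate is signed by minikube's own CA — this is an illustration, not minikube's actual implementation:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The apiserver cert is signed by the cluster's private CA,
		// so verification is skipped for this throwaway probe.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for attempt := 0; attempt < 20; attempt++ {
		resp, err := client.Get("https://192.168.94.2:8443/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
			if resp.StatusCode == http.StatusOK {
				return // "ok", as in the log above
			}
		}
		time.Sleep(500 * time.Millisecond) // retry until healthy
	}
	fmt.Println("gave up waiting for apiserver health")
}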
	W1227 09:36:22.664879  616179 node_ready.go:57] node "default-k8s-diff-port-497722" has "Ready":"False" status (will retry)
	I1227 09:36:23.163413  616179 node_ready.go:49] node "default-k8s-diff-port-497722" is "Ready"
	I1227 09:36:23.163451  616179 node_ready.go:38] duration metric: took 13.503160256s for node "default-k8s-diff-port-497722" to be "Ready" ...
	I1227 09:36:23.163470  616179 api_server.go:52] waiting for apiserver process to appear ...
	I1227 09:36:23.163548  616179 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 09:36:23.178899  616179 api_server.go:72] duration metric: took 13.808108711s to wait for apiserver process to appear ...
	I1227 09:36:23.178931  616179 api_server.go:88] waiting for apiserver healthz status ...
	I1227 09:36:23.178966  616179 api_server.go:299] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I1227 09:36:23.183768  616179 api_server.go:325] https://192.168.103.2:8444/healthz returned 200:
	ok
	I1227 09:36:23.184969  616179 api_server.go:141] control plane version: v1.35.0
	I1227 09:36:23.184994  616179 api_server.go:131] duration metric: took 6.056457ms to wait for apiserver health ...
	I1227 09:36:23.185003  616179 system_pods.go:43] waiting for kube-system pods to appear ...
	I1227 09:36:23.188536  616179 system_pods.go:59] 8 kube-system pods found
	I1227 09:36:23.188575  616179 system_pods.go:61] "coredns-7d764666f9-wfv5r" [c9445108-899a-4589-9501-4ffa7cd80a43] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 09:36:23.188586  616179 system_pods.go:61] "etcd-default-k8s-diff-port-497722" [5e51d021-5ac6-42ab-82c9-2f3db25a111e] Running
	I1227 09:36:23.188599  616179 system_pods.go:61] "kindnet-rd4dj" [c9a44ecf-3860-4022-bcbe-25cfdb86502a] Running
	I1227 09:36:23.188605  616179 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-497722" [fc2ed6d4-1f2c-4e4d-9a59-07b9ef712893] Running
	I1227 09:36:23.188611  616179 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-497722" [c56ec738-10be-417c-808c-c65189f7a6a8] Running
	I1227 09:36:23.188619  616179 system_pods.go:61] "kube-proxy-6z4vt" [25c2458e-d68a-488d-803c-80e0c6191bad] Running
	I1227 09:36:23.188624  616179 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-497722" [fccea7b8-16e7-488f-b2a4-3d46c2d108d9] Running
	I1227 09:36:23.188633  616179 system_pods.go:61] "storage-provisioner" [c84aab32-9d34-4b1d-a3ee-813926808b75] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1227 09:36:23.188641  616179 system_pods.go:74] duration metric: took 3.631884ms to wait for pod list to return data ...
	I1227 09:36:23.188655  616179 default_sa.go:34] waiting for default service account to be created ...
	I1227 09:36:23.191155  616179 default_sa.go:45] found service account: "default"
	I1227 09:36:23.191180  616179 default_sa.go:55] duration metric: took 2.516479ms for default service account to be created ...
	I1227 09:36:23.191191  616179 system_pods.go:116] waiting for k8s-apps to be running ...
	I1227 09:36:23.194108  616179 system_pods.go:86] 8 kube-system pods found
	I1227 09:36:23.194138  616179 system_pods.go:89] "coredns-7d764666f9-wfv5r" [c9445108-899a-4589-9501-4ffa7cd80a43] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 09:36:23.194145  616179 system_pods.go:89] "etcd-default-k8s-diff-port-497722" [5e51d021-5ac6-42ab-82c9-2f3db25a111e] Running
	I1227 09:36:23.194154  616179 system_pods.go:89] "kindnet-rd4dj" [c9a44ecf-3860-4022-bcbe-25cfdb86502a] Running
	I1227 09:36:23.194160  616179 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-497722" [fc2ed6d4-1f2c-4e4d-9a59-07b9ef712893] Running
	I1227 09:36:23.194165  616179 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-497722" [c56ec738-10be-417c-808c-c65189f7a6a8] Running
	I1227 09:36:23.194171  616179 system_pods.go:89] "kube-proxy-6z4vt" [25c2458e-d68a-488d-803c-80e0c6191bad] Running
	I1227 09:36:23.194175  616179 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-497722" [fccea7b8-16e7-488f-b2a4-3d46c2d108d9] Running
	I1227 09:36:23.194179  616179 system_pods.go:89] "storage-provisioner" [c84aab32-9d34-4b1d-a3ee-813926808b75] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1227 09:36:23.194203  616179 retry.go:84] will retry after 200ms: missing components: kube-dns
	I1227 09:36:23.406401  616179 system_pods.go:86] 8 kube-system pods found
	I1227 09:36:23.406445  616179 system_pods.go:89] "coredns-7d764666f9-wfv5r" [c9445108-899a-4589-9501-4ffa7cd80a43] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 09:36:23.406454  616179 system_pods.go:89] "etcd-default-k8s-diff-port-497722" [5e51d021-5ac6-42ab-82c9-2f3db25a111e] Running
	I1227 09:36:23.406461  616179 system_pods.go:89] "kindnet-rd4dj" [c9a44ecf-3860-4022-bcbe-25cfdb86502a] Running
	I1227 09:36:23.406467  616179 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-497722" [fc2ed6d4-1f2c-4e4d-9a59-07b9ef712893] Running
	I1227 09:36:23.406473  616179 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-497722" [c56ec738-10be-417c-808c-c65189f7a6a8] Running
	I1227 09:36:23.406478  616179 system_pods.go:89] "kube-proxy-6z4vt" [25c2458e-d68a-488d-803c-80e0c6191bad] Running
	I1227 09:36:23.406483  616179 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-497722" [fccea7b8-16e7-488f-b2a4-3d46c2d108d9] Running
	I1227 09:36:23.406488  616179 system_pods.go:89] "storage-provisioner" [c84aab32-9d34-4b1d-a3ee-813926808b75] Running
	I1227 09:36:23.406499  616179 system_pods.go:126] duration metric: took 215.299714ms to wait for k8s-apps to be running ...
	I1227 09:36:23.406513  616179 system_svc.go:44] waiting for kubelet service to be running ....
	I1227 09:36:23.406568  616179 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 09:36:23.422554  616179 system_svc.go:56] duration metric: took 16.02927ms WaitForService to wait for kubelet
	I1227 09:36:23.422585  616179 kubeadm.go:587] duration metric: took 14.05180013s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 09:36:23.422606  616179 node_conditions.go:102] verifying NodePressure condition ...
	I1227 09:36:23.425525  616179 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1227 09:36:23.425563  616179 node_conditions.go:123] node cpu capacity is 8
	I1227 09:36:23.425603  616179 node_conditions.go:105] duration metric: took 2.990468ms to run NodePressure ...
	I1227 09:36:23.425622  616179 start.go:242] waiting for startup goroutines ...
	I1227 09:36:23.425633  616179 start.go:247] waiting for cluster config update ...
	I1227 09:36:23.425646  616179 start.go:256] writing updated cluster config ...
	I1227 09:36:23.426029  616179 ssh_runner.go:195] Run: rm -f paused
	I1227 09:36:23.430159  616179 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1227 09:36:23.433730  616179 pod_ready.go:83] waiting for pod "coredns-7d764666f9-wfv5r" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:36:24.439386  616179 pod_ready.go:94] pod "coredns-7d764666f9-wfv5r" is "Ready"
	I1227 09:36:24.439414  616179 pod_ready.go:86] duration metric: took 1.005660831s for pod "coredns-7d764666f9-wfv5r" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:36:24.442016  616179 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-497722" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:36:24.445826  616179 pod_ready.go:94] pod "etcd-default-k8s-diff-port-497722" is "Ready"
	I1227 09:36:24.445849  616179 pod_ready.go:86] duration metric: took 3.807307ms for pod "etcd-default-k8s-diff-port-497722" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:36:24.447851  616179 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-497722" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:36:24.451495  616179 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-497722" is "Ready"
	I1227 09:36:24.451514  616179 pod_ready.go:86] duration metric: took 3.640701ms for pod "kube-apiserver-default-k8s-diff-port-497722" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:36:24.453395  616179 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-497722" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:36:24.637337  616179 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-497722" is "Ready"
	I1227 09:36:24.637370  616179 pod_ready.go:86] duration metric: took 183.957443ms for pod "kube-controller-manager-default-k8s-diff-port-497722" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:36:24.837272  616179 pod_ready.go:83] waiting for pod "kube-proxy-6z4vt" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:36:25.238026  616179 pod_ready.go:94] pod "kube-proxy-6z4vt" is "Ready"
	I1227 09:36:25.238052  616179 pod_ready.go:86] duration metric: took 400.752514ms for pod "kube-proxy-6z4vt" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:36:25.437923  616179 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-497722" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:36:25.837173  616179 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-497722" is "Ready"
	I1227 09:36:25.837207  616179 pod_ready.go:86] duration metric: took 399.25682ms for pod "kube-scheduler-default-k8s-diff-port-497722" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:36:25.837234  616179 pod_ready.go:40] duration metric: took 2.40703441s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1227 09:36:25.883242  616179 start.go:625] kubectl: 1.35.0, cluster: 1.35.0 (minor skew: 0)
	I1227 09:36:25.886195  616179 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-497722" cluster and "default" namespace by default
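The per-pod waits from pod_ready.go above boil down to reading each pod's PodReady condition. A sketch of that check with client-go, assuming a kubeconfig at the default location and reusing the coredns pod name from this run; the helper isReady is ours, not minikube's:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isReady reports whether the pod's PodReady condition is True.
func isReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	pod, err := cs.CoreV1().Pods("kube-system").Get(context.Background(),
		"coredns-7d764666f9-wfv5r", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("pod %q Ready=%v\n", pod.Name, isReady(pod))
}

The same conditions drive the "Running / Ready:ContainersNotReady" annotations printed for each pod earlier in the log.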
	W1227 09:36:24.265838  613189 pod_ready.go:104] pod "coredns-5dd5756b68-l2f7v" is not "Ready", error: <nil>
	W1227 09:36:26.761645  613189 pod_ready.go:104] pod "coredns-5dd5756b68-l2f7v" is not "Ready", error: <nil>
	W1227 09:36:28.763118  613189 pod_ready.go:104] pod "coredns-5dd5756b68-l2f7v" is not "Ready", error: <nil>
	I1227 09:36:29.267372  613189 pod_ready.go:94] pod "coredns-5dd5756b68-l2f7v" is "Ready"
	I1227 09:36:29.267400  613189 pod_ready.go:86] duration metric: took 39.010575903s for pod "coredns-5dd5756b68-l2f7v" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:36:29.271701  613189 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-094398" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:36:29.276827  613189 pod_ready.go:94] pod "etcd-old-k8s-version-094398" is "Ready"
	I1227 09:36:29.276854  613189 pod_ready.go:86] duration metric: took 5.125471ms for pod "etcd-old-k8s-version-094398" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:36:29.280131  613189 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-094398" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:36:29.285165  613189 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-094398" is "Ready"
	I1227 09:36:29.285233  613189 pod_ready.go:86] duration metric: took 5.074304ms for pod "kube-apiserver-old-k8s-version-094398" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:36:29.288829  613189 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-094398" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:36:29.464310  613189 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-094398" is "Ready"
	I1227 09:36:29.464343  613189 pod_ready.go:86] duration metric: took 175.492277ms for pod "kube-controller-manager-old-k8s-version-094398" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:36:29.664360  613189 pod_ready.go:83] waiting for pod "kube-proxy-w8h4h" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:36:30.063146  613189 pod_ready.go:94] pod "kube-proxy-w8h4h" is "Ready"
	I1227 09:36:30.063178  613189 pod_ready.go:86] duration metric: took 398.787394ms for pod "kube-proxy-w8h4h" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:36:30.264144  613189 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-094398" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:36:30.663989  613189 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-094398" is "Ready"
	I1227 09:36:30.664030  613189 pod_ready.go:86] duration metric: took 399.855087ms for pod "kube-scheduler-old-k8s-version-094398" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:36:30.664045  613189 pod_ready.go:40] duration metric: took 40.412649094s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1227 09:36:30.710355  613189 start.go:625] kubectl: 1.35.0, cluster: 1.28.0 (minor skew: 7)
	I1227 09:36:30.713249  613189 out.go:203] 
	W1227 09:36:30.714603  613189 out.go:285] ! /usr/local/bin/kubectl is version 1.35.0, which may have incompatibilities with Kubernetes 1.28.0.
	I1227 09:36:30.715969  613189 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1227 09:36:30.717115  613189 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-094398" cluster and "default" namespace by default
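The skew warning above is mechanical: kubectl 1.35 against a 1.28 cluster is seven minor versions apart, well outside the +/-1 minor skew kubectl supports. A toy sketch of that comparison (not minikube's code):

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minor extracts the minor number from a "major.minor.patch" version string.
func minor(v string) int {
	n, _ := strconv.Atoi(strings.Split(v, ".")[1])
	return n
}

func main() {
	client, cluster := "1.35.0", "1.28.0"
	skew := minor(client) - minor(cluster)
	if skew < 0 {
		skew = -skew
	}
	fmt.Printf("minor skew: %d\n", skew) // 7, outside kubectl's supported +/-1
	if skew > 1 {
		fmt.Println("! kubectl may have incompatibilities with this cluster")
	}
}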
	W1227 09:36:27.750662  622335 pod_ready.go:104] pod "coredns-7d764666f9-vm5hp" is not "Ready", error: <nil>
	W1227 09:36:29.752327  622335 pod_ready.go:104] pod "coredns-7d764666f9-vm5hp" is not "Ready", error: <nil>
	W1227 09:36:32.251754  622335 pod_ready.go:104] pod "coredns-7d764666f9-vm5hp" is not "Ready", error: <nil>
	W1227 09:36:34.327887  622335 pod_ready.go:104] pod "coredns-7d764666f9-vm5hp" is not "Ready", error: <nil>
	W1227 09:36:36.751189  622335 pod_ready.go:104] pod "coredns-7d764666f9-vm5hp" is not "Ready", error: <nil>
	W1227 09:36:39.251134  622335 pod_ready.go:104] pod "coredns-7d764666f9-vm5hp" is not "Ready", error: <nil>
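The repeated `is not "Ready" (will retry)` lines above come from a bounded poll: check, sleep, re-check, until an overall deadline (4m0s here) expires. A minimal sketch of that loop shape with a placeholder check function — the real check would fetch the pod and test its Ready condition:

package main

import (
	"errors"
	"fmt"
	"time"
)

// waitFor polls check every interval until it returns true or the
// deadline passes, mirroring the "extra waiting up to 4m0s" loop above.
func waitFor(timeout, interval time.Duration, check func() bool) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if check() {
			return nil
		}
		time.Sleep(interval)
	}
	return errors.New("timed out")
}

func main() {
	start := time.Now()
	err := waitFor(4*time.Minute, 2*time.Second, func() bool {
		// Placeholder condition so the example terminates.
		return time.Since(start) > 6*time.Second
	})
	fmt.Println("result:", err)
}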
	
	
	==> CRI-O <==
	Dec 27 09:36:08 old-k8s-version-094398 crio[568]: time="2025-12-27T09:36:08.061188144Z" level=info msg="Created container 226e6694486c7dbda528967c387d7f6eba3323ceedc06d943992c2aa1a31b965: kubernetes-dashboard/kubernetes-dashboard-8694d4445c-5jv6d/kubernetes-dashboard" id=faee3fee-1943-48f9-b200-23a2db4082aa name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 09:36:08 old-k8s-version-094398 crio[568]: time="2025-12-27T09:36:08.061674832Z" level=info msg="Starting container: 226e6694486c7dbda528967c387d7f6eba3323ceedc06d943992c2aa1a31b965" id=57e9101c-6a7f-4e32-ad7d-d277dc4f41fc name=/runtime.v1.RuntimeService/StartContainer
	Dec 27 09:36:08 old-k8s-version-094398 crio[568]: time="2025-12-27T09:36:08.06332045Z" level=info msg="Started container" PID=1729 containerID=226e6694486c7dbda528967c387d7f6eba3323ceedc06d943992c2aa1a31b965 description=kubernetes-dashboard/kubernetes-dashboard-8694d4445c-5jv6d/kubernetes-dashboard id=57e9101c-6a7f-4e32-ad7d-d277dc4f41fc name=/runtime.v1.RuntimeService/StartContainer sandboxID=9e55c04dd102e348cfa24aae353b2626b5dc33cd608dbbb9064beb6639febf31
	Dec 27 09:36:20 old-k8s-version-094398 crio[568]: time="2025-12-27T09:36:20.083243576Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=48f22a67-b88c-483f-aff9-75038bde454e name=/runtime.v1.ImageService/ImageStatus
	Dec 27 09:36:20 old-k8s-version-094398 crio[568]: time="2025-12-27T09:36:20.084182551Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=98969ea1-00df-47bb-ac1f-cd1b009c3620 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 09:36:20 old-k8s-version-094398 crio[568]: time="2025-12-27T09:36:20.085162176Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=dafc2881-b7fc-4efb-8345-711bcf3e2552 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 09:36:20 old-k8s-version-094398 crio[568]: time="2025-12-27T09:36:20.085302002Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 09:36:20 old-k8s-version-094398 crio[568]: time="2025-12-27T09:36:20.089275067Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 09:36:20 old-k8s-version-094398 crio[568]: time="2025-12-27T09:36:20.089422222Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/452ed8e95a5819759de91ef7b1f62129fe64a168aac652898c88417b693ceb0c/merged/etc/passwd: no such file or directory"
	Dec 27 09:36:20 old-k8s-version-094398 crio[568]: time="2025-12-27T09:36:20.089445982Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/452ed8e95a5819759de91ef7b1f62129fe64a168aac652898c88417b693ceb0c/merged/etc/group: no such file or directory"
	Dec 27 09:36:20 old-k8s-version-094398 crio[568]: time="2025-12-27T09:36:20.089654808Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 09:36:20 old-k8s-version-094398 crio[568]: time="2025-12-27T09:36:20.107442029Z" level=info msg="Created container 2613f183356984cb4a6f283d937f3370b199b0a7966e716e98cf84ee962e01be: kube-system/storage-provisioner/storage-provisioner" id=dafc2881-b7fc-4efb-8345-711bcf3e2552 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 09:36:20 old-k8s-version-094398 crio[568]: time="2025-12-27T09:36:20.10808415Z" level=info msg="Starting container: 2613f183356984cb4a6f283d937f3370b199b0a7966e716e98cf84ee962e01be" id=db4e3b1b-2783-49f6-a6cf-1b10dd76df77 name=/runtime.v1.RuntimeService/StartContainer
	Dec 27 09:36:20 old-k8s-version-094398 crio[568]: time="2025-12-27T09:36:20.109989891Z" level=info msg="Started container" PID=1755 containerID=2613f183356984cb4a6f283d937f3370b199b0a7966e716e98cf84ee962e01be description=kube-system/storage-provisioner/storage-provisioner id=db4e3b1b-2783-49f6-a6cf-1b10dd76df77 name=/runtime.v1.RuntimeService/StartContainer sandboxID=d59a3e614ede54f9dc6f5940addd9ab4fd1aa4d04ed10015b8ec9cea64c3f94f
	Dec 27 09:36:26 old-k8s-version-094398 crio[568]: time="2025-12-27T09:36:26.955895586Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=e3ec6a62-f9cf-4145-841c-de30c6163485 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 09:36:26 old-k8s-version-094398 crio[568]: time="2025-12-27T09:36:26.956886271Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=f7c7276a-55f5-427a-b8d2-21a8d70af59a name=/runtime.v1.ImageService/ImageStatus
	Dec 27 09:36:26 old-k8s-version-094398 crio[568]: time="2025-12-27T09:36:26.957914894Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-vv6xx/dashboard-metrics-scraper" id=e6476923-fb69-4afe-aaf0-3cf3ed7106ed name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 09:36:26 old-k8s-version-094398 crio[568]: time="2025-12-27T09:36:26.958067203Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 09:36:26 old-k8s-version-094398 crio[568]: time="2025-12-27T09:36:26.963540072Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 09:36:26 old-k8s-version-094398 crio[568]: time="2025-12-27T09:36:26.964139334Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 09:36:26 old-k8s-version-094398 crio[568]: time="2025-12-27T09:36:26.98956236Z" level=info msg="Created container e515e63f9478f3764e817ba81f243e7bec7930bfcdb3d016306773fdc664f99a: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-vv6xx/dashboard-metrics-scraper" id=e6476923-fb69-4afe-aaf0-3cf3ed7106ed name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 09:36:26 old-k8s-version-094398 crio[568]: time="2025-12-27T09:36:26.990175047Z" level=info msg="Starting container: e515e63f9478f3764e817ba81f243e7bec7930bfcdb3d016306773fdc664f99a" id=649f1c18-1700-44f6-9cd7-934895851e92 name=/runtime.v1.RuntimeService/StartContainer
	Dec 27 09:36:26 old-k8s-version-094398 crio[568]: time="2025-12-27T09:36:26.99204128Z" level=info msg="Started container" PID=1771 containerID=e515e63f9478f3764e817ba81f243e7bec7930bfcdb3d016306773fdc664f99a description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-vv6xx/dashboard-metrics-scraper id=649f1c18-1700-44f6-9cd7-934895851e92 name=/runtime.v1.RuntimeService/StartContainer sandboxID=08f15750dc8398826d19b2b27d6f98622ddbad642016bccfdfbb7d38b46f7081
	Dec 27 09:36:27 old-k8s-version-094398 crio[568]: time="2025-12-27T09:36:27.10423379Z" level=info msg="Removing container: 1679ca42635a728007911cb9006ec19e95201273d41d4c1933cc2a77a19eaa52" id=f85bac27-dfe0-494e-b6cf-46c8c036ae0e name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 27 09:36:27 old-k8s-version-094398 crio[568]: time="2025-12-27T09:36:27.113660297Z" level=info msg="Removed container 1679ca42635a728007911cb9006ec19e95201273d41d4c1933cc2a77a19eaa52: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-vv6xx/dashboard-metrics-scraper" id=f85bac27-dfe0-494e-b6cf-46c8c036ae0e name=/runtime.v1.RuntimeService/RemoveContainer
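Each CreateContainer/StartContainer pair above can be cross-checked against the runtime with crictl, whose `ps -a` output resembles the "container status" section below. A sketch that shells out to crictl and filters by pod namespace — it assumes crictl on the node's PATH and root access, and the label value is taken from this run:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// List all containers (running and exited) in the
	// kubernetes-dashboard namespace known to the CRI runtime.
	out, err := exec.Command("sudo", "crictl", "ps", "-a",
		"--label", "io.kubernetes.pod.namespace=kubernetes-dashboard").CombinedOutput()
	if err != nil {
		fmt.Printf("crictl failed: %v\n%s", err, out)
		return
	}
	fmt.Printf("%s", out)
}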
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	e515e63f9478f       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           18 seconds ago      Exited              dashboard-metrics-scraper   2                   08f15750dc839       dashboard-metrics-scraper-5f989dc9cf-vv6xx       kubernetes-dashboard
	2613f18335698       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           25 seconds ago      Running             storage-provisioner         1                   d59a3e614ede5       storage-provisioner                              kube-system
	226e6694486c7       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   37 seconds ago      Running             kubernetes-dashboard        0                   9e55c04dd102e       kubernetes-dashboard-8694d4445c-5jv6d            kubernetes-dashboard
	4ca19515ea39c       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                           56 seconds ago      Running             coredns                     0                   874369c605ecc       coredns-5dd5756b68-l2f7v                         kube-system
	a2edc52c4d7b9       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           56 seconds ago      Running             busybox                     1                   2bdbb562f4ba8       busybox                                          default
	6f2897ce522ad       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           56 seconds ago      Exited              storage-provisioner         0                   d59a3e614ede5       storage-provisioner                              kube-system
	35bf241ea4f56       4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251                                           56 seconds ago      Running             kindnet-cni                 0                   794b80c3d7c45       kindnet-hb4bf                                    kube-system
	784231ee46284       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                           56 seconds ago      Running             kube-proxy                  0                   08358cec2f2db       kube-proxy-w8h4h                                 kube-system
	c5fc71eb798e5       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                           59 seconds ago      Running             etcd                        0                   de58b3b256b42       etcd-old-k8s-version-094398                      kube-system
	e5693fba04384       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                           59 seconds ago      Running             kube-scheduler              0                   fdf4bcac577a3       kube-scheduler-old-k8s-version-094398            kube-system
	8223e42bf97a0       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                           59 seconds ago      Running             kube-controller-manager     0                   ef4d1bc0e1654       kube-controller-manager-old-k8s-version-094398   kube-system
	9f6ac155f6a42       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                           59 seconds ago      Running             kube-apiserver              0                   6a857eb2caf4d       kube-apiserver-old-k8s-version-094398            kube-system
	
	
	==> coredns [4ca19515ea39c3a7dcba93a0c272b27d0f41a8aed7ed1b0673487d1d34e4a94e] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:35127 - 15892 "HINFO IN 6935948074902101905.2636398165220937640. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.0145102s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> describe nodes <==
	Name:               old-k8s-version-094398
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-094398
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6d12708197eb5984f26340287ced0d25367967e8
	                    minikube.k8s.io/name=old-k8s-version-094398
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_27T09_34_40_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 27 Dec 2025 09:34:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-094398
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 27 Dec 2025 09:36:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 27 Dec 2025 09:36:19 +0000   Sat, 27 Dec 2025 09:34:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 27 Dec 2025 09:36:19 +0000   Sat, 27 Dec 2025 09:34:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 27 Dec 2025 09:36:19 +0000   Sat, 27 Dec 2025 09:34:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 27 Dec 2025 09:36:19 +0000   Sat, 27 Dec 2025 09:35:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-094398
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 6a05ebbd9dbde47388fc365d694bbbfd
	  System UUID:                2ce0eeb8-0e2e-4c1d-a2fe-ade8e6b7daeb
	  Boot ID:                    38891013-55ac-4a66-aef6-0b0711ecc60c
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 coredns-5dd5756b68-l2f7v                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     113s
	  kube-system                 etcd-old-k8s-version-094398                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m6s
	  kube-system                 kindnet-hb4bf                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      113s
	  kube-system                 kube-apiserver-old-k8s-version-094398             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m6s
	  kube-system                 kube-controller-manager-old-k8s-version-094398    200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m6s
	  kube-system                 kube-proxy-w8h4h                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-scheduler-old-k8s-version-094398             100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m6s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         112s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-vv6xx        0 (0%)        0 (0%)      0 (0%)           0 (0%)         44s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-5jv6d             0 (0%)        0 (0%)      0 (0%)           0 (0%)         44s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 112s                   kube-proxy       
	  Normal  Starting                 56s                    kube-proxy       
	  Normal  Starting                 2m11s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m11s (x8 over 2m11s)  kubelet          Node old-k8s-version-094398 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m11s (x8 over 2m11s)  kubelet          Node old-k8s-version-094398 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m11s (x8 over 2m11s)  kubelet          Node old-k8s-version-094398 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    2m6s                   kubelet          Node old-k8s-version-094398 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  2m6s                   kubelet          Node old-k8s-version-094398 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     2m6s                   kubelet          Node old-k8s-version-094398 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m6s                   kubelet          Starting kubelet.
	  Normal  RegisteredNode           114s                   node-controller  Node old-k8s-version-094398 event: Registered Node old-k8s-version-094398 in Controller
	  Normal  NodeReady                100s                   kubelet          Node old-k8s-version-094398 status is now: NodeReady
	  Normal  Starting                 60s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  59s (x8 over 60s)      kubelet          Node old-k8s-version-094398 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    59s (x8 over 60s)      kubelet          Node old-k8s-version-094398 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     59s (x8 over 60s)      kubelet          Node old-k8s-version-094398 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           44s                    node-controller  Node old-k8s-version-094398 event: Registered Node old-k8s-version-094398 in Controller
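The MemoryPressure/DiskPressure/PIDPressure conditions and the capacity block in this describe output are the same fields node_conditions.go reads earlier in the log ("node cpu capacity is 8", "node storage ephemeral capacity is 304681132Ki"). A client-go sketch of that read, assuming a default kubeconfig and the node name from this run:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	node, err := cs.CoreV1().Nodes().Get(context.Background(),
		"old-k8s-version-094398", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// Capacity, as in "node cpu capacity is 8" above.
	fmt.Println("cpu:", node.Status.Capacity.Cpu().String())
	fmt.Println("ephemeral-storage:", node.Status.Capacity.StorageEphemeral().String())
	// Pressure conditions should all be False on a healthy node.
	for _, c := range node.Status.Conditions {
		switch c.Type {
		case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
			fmt.Printf("%s=%s\n", c.Type, c.Status)
		}
	}
}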
	
	
	==> dmesg <==
	[Dec27 09:02] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 3a ce 76 ea 00 16 08 06
	[ +15.308804] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 5e ed 76 f4 5a 59 08 06
	[  +0.000321] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 3a ce 76 ea 00 16 08 06
	[  +1.257710] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 62 30 8d 6e b6 d3 08 06
	[Dec27 09:03] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff de f8 3b c4 47 35 08 06
	[  +0.002052] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff de 0f d7 5a 4b 84 08 06
	[ +15.582399] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff f6 ce 0c 6c 86 d2 08 06
	[  +0.000307] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 62 30 8d 6e b6 d3 08 06
	[  +9.916919] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 5e cc 0c a0 d8 19 08 06
	[  +0.000360] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 06 01 72 f8 79 43 08 06
	[ +21.695394] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 66 bb be 81 50 ce 08 06
	[  +0.000381] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff de 0f d7 5a 4b 84 08 06
	
	
	==> etcd [c5fc71eb798e5068dea2558184fc2f1324dfde7c0fb1d8eb63ec2e35afe24f87] <==
	{"level":"info","ts":"2025-12-27T09:35:46.566584Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-27T09:35:46.566596Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-27T09:35:46.566714Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2025-12-27T09:35:46.567418Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2025-12-27T09:35:46.567662Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-27T09:35:46.568092Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-27T09:35:46.57152Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-12-27T09:35:46.571812Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-12-27T09:35:46.5719Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-12-27T09:35:46.572056Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-12-27T09:35:46.572076Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-12-27T09:35:47.558438Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2025-12-27T09:35:47.558529Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-12-27T09:35:47.558551Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-12-27T09:35:47.558576Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2025-12-27T09:35:47.558589Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-12-27T09:35:47.558603Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2025-12-27T09:35:47.558614Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-12-27T09:35:47.680208Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:old-k8s-version-094398 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-27T09:35:47.680215Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T09:35:47.680244Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T09:35:47.680468Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-27T09:35:47.68049Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-27T09:35:47.681499Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-12-27T09:35:47.681625Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 09:36:45 up  1:19,  0 user,  load average: 3.05, 3.08, 2.31
	Linux old-k8s-version-094398 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [35bf241ea4f56cd57193378e8b7210de26f0ac1767ec286d23632fe9931444cc] <==
	I1227 09:35:49.611219       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1227 09:35:49.611448       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1227 09:35:49.611907       1 main.go:148] setting mtu 1500 for CNI 
	I1227 09:35:49.611967       1 main.go:178] kindnetd IP family: "ipv4"
	I1227 09:35:49.612009       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-27T09:35:49Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1227 09:35:49.907016       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1227 09:35:50.006005       1 controller.go:381] "Waiting for informer caches to sync"
	I1227 09:35:50.006069       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1227 09:35:50.006309       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1227 09:35:50.306421       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1227 09:35:50.306454       1 metrics.go:72] Registering metrics
	I1227 09:35:50.306517       1 controller.go:711] "Syncing nftables rules"
	I1227 09:35:59.905925       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1227 09:35:59.905963       1 main.go:301] handling current node
	I1227 09:36:09.905891       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1227 09:36:09.905926       1 main.go:301] handling current node
	I1227 09:36:19.906456       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1227 09:36:19.906484       1 main.go:301] handling current node
	I1227 09:36:29.905902       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1227 09:36:29.905938       1 main.go:301] handling current node
	I1227 09:36:39.913135       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1227 09:36:39.913163       1 main.go:301] handling current node
	
	
	==> kube-apiserver [9f6ac155f6a42bf5c5966c64219e0b13256e12d474b8bdc4feb2e3846eeca31d] <==
	I1227 09:35:48.831664       1 controller.go:78] Starting OpenAPI AggregationController
	I1227 09:35:48.913666       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1227 09:35:48.926948       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1227 09:35:48.927851       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1227 09:35:48.928848       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1227 09:35:48.929299       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1227 09:35:48.929336       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1227 09:35:48.929503       1 shared_informer.go:318] Caches are synced for configmaps
	I1227 09:35:48.929661       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1227 09:35:48.929687       1 aggregator.go:166] initial CRD sync complete...
	I1227 09:35:48.929695       1 autoregister_controller.go:141] Starting autoregister controller
	I1227 09:35:48.929701       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1227 09:35:48.929707       1 cache.go:39] Caches are synced for autoregister controller
	I1227 09:35:48.931905       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1227 09:35:49.830500       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1227 09:35:50.087741       1 controller.go:624] quota admission added evaluator for: namespaces
	I1227 09:35:50.120190       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1227 09:35:50.138019       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1227 09:35:50.146101       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1227 09:35:50.152598       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1227 09:35:50.192510       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.104.79.181"}
	I1227 09:35:50.206329       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.109.41.95"}
	I1227 09:36:01.402418       1 controller.go:624] quota admission added evaluator for: endpoints
	I1227 09:36:01.404399       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1227 09:36:01.447335       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [8223e42bf97a022ceca335fd381b6b6c3aeac7c607125c2e1cbf3b803876c7ad] <==
	I1227 09:36:01.454816       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="47.469932ms"
	I1227 09:36:01.457703       1 endpointslice_controller.go:310] "Error syncing endpoint slices for service, retrying" key="kubernetes-dashboard/dashboard-metrics-scraper" err="EndpointSlice informer cache is out of date"
	I1227 09:36:01.458442       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="50.460169ms"
	I1227 09:36:01.459670       1 endpointslice_controller.go:310] "Error syncing endpoint slices for service, retrying" key="kubernetes-dashboard/kubernetes-dashboard" err="EndpointSlice informer cache is out of date"
	I1227 09:36:01.470137       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="11.643692ms"
	I1227 09:36:01.470921       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="63.656µs"
	I1227 09:36:01.471257       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="16.297353ms"
	I1227 09:36:01.471560       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="244.081µs"
	I1227 09:36:01.471996       1 shared_informer.go:318] Caches are synced for PV protection
	I1227 09:36:01.473852       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="2.250132ms"
	I1227 09:36:01.480621       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="54.682µs"
	I1227 09:36:01.512127       1 shared_informer.go:318] Caches are synced for persistent volume
	I1227 09:36:01.531013       1 shared_informer.go:318] Caches are synced for attach detach
	I1227 09:36:01.927378       1 shared_informer.go:318] Caches are synced for garbage collector
	I1227 09:36:01.969819       1 shared_informer.go:318] Caches are synced for garbage collector
	I1227 09:36:01.969848       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1227 09:36:05.060103       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="62.634µs"
	I1227 09:36:06.066832       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="67.043µs"
	I1227 09:36:07.066328       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="83.858µs"
	I1227 09:36:09.072658       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="4.734144ms"
	I1227 09:36:09.072766       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="60.699µs"
	I1227 09:36:27.114580       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="74.99µs"
	I1227 09:36:29.014601       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="11.091413ms"
	I1227 09:36:29.014987       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="241.018µs"
	I1227 09:36:31.755146       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="59.698µs"
	
	
	==> kube-proxy [784231ee46284fb40820138878c1485b83721a6aee020dd20705f6b5990a1df6] <==
	I1227 09:35:49.438581       1 server_others.go:69] "Using iptables proxy"
	I1227 09:35:49.451569       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I1227 09:35:49.479291       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1227 09:35:49.482575       1 server_others.go:152] "Using iptables Proxier"
	I1227 09:35:49.482682       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1227 09:35:49.483001       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1227 09:35:49.483093       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1227 09:35:49.483411       1 server.go:846] "Version info" version="v1.28.0"
	I1227 09:35:49.483891       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 09:35:49.485008       1 config.go:188] "Starting service config controller"
	I1227 09:35:49.485078       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1227 09:35:49.485126       1 config.go:97] "Starting endpoint slice config controller"
	I1227 09:35:49.485155       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1227 09:35:49.486356       1 config.go:315] "Starting node config controller"
	I1227 09:35:49.486409       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1227 09:35:49.586232       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1227 09:35:49.586426       1 shared_informer.go:318] Caches are synced for service config
	I1227 09:35:49.586770       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [e5693fba043840c8a3d1117c5220be21e1cfd4a801e563c4512c5828ce4adbcd] <==
	I1227 09:35:47.146644       1 serving.go:348] Generated self-signed cert in-memory
	W1227 09:35:48.881204       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1227 09:35:48.881254       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1227 09:35:48.881267       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1227 09:35:48.881277       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1227 09:35:48.911547       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1227 09:35:48.911696       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 09:35:48.914618       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1227 09:35:48.914703       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1227 09:35:48.914956       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1227 09:35:48.915057       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1227 09:35:49.015384       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 27 09:36:01 old-k8s-version-094398 kubelet[729]: I1227 09:36:01.561085     729 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/ed747cfb-658d-49af-a938-883bb87814f1-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-vv6xx\" (UID: \"ed747cfb-658d-49af-a938-883bb87814f1\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-vv6xx"
	Dec 27 09:36:01 old-k8s-version-094398 kubelet[729]: I1227 09:36:01.561169     729 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/77fcfcb8-b434-4190-af5d-903ac4004b5c-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-5jv6d\" (UID: \"77fcfcb8-b434-4190-af5d-903ac4004b5c\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-5jv6d"
	Dec 27 09:36:01 old-k8s-version-094398 kubelet[729]: I1227 09:36:01.561218     729 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zwv4\" (UniqueName: \"kubernetes.io/projected/ed747cfb-658d-49af-a938-883bb87814f1-kube-api-access-9zwv4\") pod \"dashboard-metrics-scraper-5f989dc9cf-vv6xx\" (UID: \"ed747cfb-658d-49af-a938-883bb87814f1\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-vv6xx"
	Dec 27 09:36:01 old-k8s-version-094398 kubelet[729]: I1227 09:36:01.561254     729 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grl2t\" (UniqueName: \"kubernetes.io/projected/77fcfcb8-b434-4190-af5d-903ac4004b5c-kube-api-access-grl2t\") pod \"kubernetes-dashboard-8694d4445c-5jv6d\" (UID: \"77fcfcb8-b434-4190-af5d-903ac4004b5c\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-5jv6d"
	Dec 27 09:36:05 old-k8s-version-094398 kubelet[729]: I1227 09:36:05.042534     729 scope.go:117] "RemoveContainer" containerID="5a2061b4773bb46fa9d29ac94c9f071b01ba584f2971f1c2e37bd5dd2ab50286"
	Dec 27 09:36:06 old-k8s-version-094398 kubelet[729]: I1227 09:36:06.048446     729 scope.go:117] "RemoveContainer" containerID="5a2061b4773bb46fa9d29ac94c9f071b01ba584f2971f1c2e37bd5dd2ab50286"
	Dec 27 09:36:06 old-k8s-version-094398 kubelet[729]: I1227 09:36:06.048644     729 scope.go:117] "RemoveContainer" containerID="1679ca42635a728007911cb9006ec19e95201273d41d4c1933cc2a77a19eaa52"
	Dec 27 09:36:06 old-k8s-version-094398 kubelet[729]: E1227 09:36:06.049053     729 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-vv6xx_kubernetes-dashboard(ed747cfb-658d-49af-a938-883bb87814f1)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-vv6xx" podUID="ed747cfb-658d-49af-a938-883bb87814f1"
	Dec 27 09:36:07 old-k8s-version-094398 kubelet[729]: I1227 09:36:07.053368     729 scope.go:117] "RemoveContainer" containerID="1679ca42635a728007911cb9006ec19e95201273d41d4c1933cc2a77a19eaa52"
	Dec 27 09:36:07 old-k8s-version-094398 kubelet[729]: E1227 09:36:07.053740     729 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-vv6xx_kubernetes-dashboard(ed747cfb-658d-49af-a938-883bb87814f1)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-vv6xx" podUID="ed747cfb-658d-49af-a938-883bb87814f1"
	Dec 27 09:36:09 old-k8s-version-094398 kubelet[729]: I1227 09:36:09.067738     729 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-5jv6d" podStartSLOduration=1.829931582 podCreationTimestamp="2025-12-27 09:36:01 +0000 UTC" firstStartedPulling="2025-12-27 09:36:01.783252961 +0000 UTC m=+15.935911284" lastFinishedPulling="2025-12-27 09:36:08.020975565 +0000 UTC m=+22.173633702" observedRunningTime="2025-12-27 09:36:09.067575208 +0000 UTC m=+23.220233352" watchObservedRunningTime="2025-12-27 09:36:09.067654 +0000 UTC m=+23.220312143"
	Dec 27 09:36:11 old-k8s-version-094398 kubelet[729]: I1227 09:36:11.744817     729 scope.go:117] "RemoveContainer" containerID="1679ca42635a728007911cb9006ec19e95201273d41d4c1933cc2a77a19eaa52"
	Dec 27 09:36:11 old-k8s-version-094398 kubelet[729]: E1227 09:36:11.745246     729 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-vv6xx_kubernetes-dashboard(ed747cfb-658d-49af-a938-883bb87814f1)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-vv6xx" podUID="ed747cfb-658d-49af-a938-883bb87814f1"
	Dec 27 09:36:20 old-k8s-version-094398 kubelet[729]: I1227 09:36:20.082838     729 scope.go:117] "RemoveContainer" containerID="6f2897ce522ad46f87485549185bcbb88c4bb30c132de49b4d738824df1f3616"
	Dec 27 09:36:26 old-k8s-version-094398 kubelet[729]: I1227 09:36:26.955276     729 scope.go:117] "RemoveContainer" containerID="1679ca42635a728007911cb9006ec19e95201273d41d4c1933cc2a77a19eaa52"
	Dec 27 09:36:27 old-k8s-version-094398 kubelet[729]: I1227 09:36:27.102999     729 scope.go:117] "RemoveContainer" containerID="1679ca42635a728007911cb9006ec19e95201273d41d4c1933cc2a77a19eaa52"
	Dec 27 09:36:27 old-k8s-version-094398 kubelet[729]: I1227 09:36:27.103230     729 scope.go:117] "RemoveContainer" containerID="e515e63f9478f3764e817ba81f243e7bec7930bfcdb3d016306773fdc664f99a"
	Dec 27 09:36:27 old-k8s-version-094398 kubelet[729]: E1227 09:36:27.103598     729 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-vv6xx_kubernetes-dashboard(ed747cfb-658d-49af-a938-883bb87814f1)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-vv6xx" podUID="ed747cfb-658d-49af-a938-883bb87814f1"
	Dec 27 09:36:31 old-k8s-version-094398 kubelet[729]: I1227 09:36:31.743889     729 scope.go:117] "RemoveContainer" containerID="e515e63f9478f3764e817ba81f243e7bec7930bfcdb3d016306773fdc664f99a"
	Dec 27 09:36:31 old-k8s-version-094398 kubelet[729]: E1227 09:36:31.744215     729 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-vv6xx_kubernetes-dashboard(ed747cfb-658d-49af-a938-883bb87814f1)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-vv6xx" podUID="ed747cfb-658d-49af-a938-883bb87814f1"
	Dec 27 09:36:42 old-k8s-version-094398 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 27 09:36:42 old-k8s-version-094398 kubelet[729]: I1227 09:36:42.782246     729 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Dec 27 09:36:42 old-k8s-version-094398 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 27 09:36:42 old-k8s-version-094398 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 27 09:36:42 old-k8s-version-094398 systemd[1]: kubelet.service: Consumed 1.615s CPU time.
	
	
	==> kubernetes-dashboard [226e6694486c7dbda528967c387d7f6eba3323ceedc06d943992c2aa1a31b965] <==
	2025/12/27 09:36:08 Starting overwatch
	2025/12/27 09:36:08 Using namespace: kubernetes-dashboard
	2025/12/27 09:36:08 Using in-cluster config to connect to apiserver
	2025/12/27 09:36:08 Using secret token for csrf signing
	2025/12/27 09:36:08 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/27 09:36:08 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/27 09:36:08 Successful initial request to the apiserver, version: v1.28.0
	2025/12/27 09:36:08 Generating JWE encryption key
	2025/12/27 09:36:08 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/27 09:36:08 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/27 09:36:08 Initializing JWE encryption key from synchronized object
	2025/12/27 09:36:08 Creating in-cluster Sidecar client
	2025/12/27 09:36:08 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/27 09:36:08 Serving insecurely on HTTP port: 9090
	2025/12/27 09:36:38 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [2613f183356984cb4a6f283d937f3370b199b0a7966e716e98cf84ee962e01be] <==
	I1227 09:36:20.121655       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1227 09:36:20.129990       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1227 09:36:20.130041       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1227 09:36:37.525772       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1227 09:36:37.525899       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"46ce4ee8-5463-4b1a-acf6-c144e54f0eef", APIVersion:"v1", ResourceVersion:"654", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-094398_eeb43a71-c1ba-4383-8e34-c22f4ae0aa6d became leader
	I1227 09:36:37.526004       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-094398_eeb43a71-c1ba-4383-8e34-c22f4ae0aa6d!
	I1227 09:36:37.626267       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-094398_eeb43a71-c1ba-4383-8e34-c22f4ae0aa6d!
	
	
	==> storage-provisioner [6f2897ce522ad46f87485549185bcbb88c4bb30c132de49b4d738824df1f3616] <==
	I1227 09:35:49.382051       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1227 09:36:19.385357       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
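
The status probes that follow parse a single field out of "minikube status"; a non-zero exit signals that minikube considers some component stopped or unhealthy even when the printed field reads "Running", which the harness tolerates here ("may be ok"). A manual re-run of the same probe for this profile, with the exit code made visible, would be:

	out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-094398 -n old-k8s-version-094398; echo "exit: $?"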
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-094398 -n old-k8s-version-094398
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-094398 -n old-k8s-version-094398: exit status 2 (315.895665ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context old-k8s-version-094398 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
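
The kubectl probe at helpers_test.go:270 above prints, across all namespaces, the names of pods whose phase is anything other than Running; Pending, Succeeded (completed), Failed, and Unknown pods all match, and empty output means every pod reports phase Running. The equivalent standalone query against this profile's context is:

	kubectl --context old-k8s-version-094398 get po -A --field-selector=status.phase!=Running -o=jsonpath='{.items[*].metadata.name}'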
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect old-k8s-version-094398
helpers_test.go:244: (dbg) docker inspect old-k8s-version-094398:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "bfa8d511275e6666c6e19cf4732b3f642e18ad5c197ea460e951c265ec0a9509",
	        "Created": "2025-12-27T09:34:24.619442272Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 613548,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-27T09:35:38.646408352Z",
	            "FinishedAt": "2025-12-27T09:35:37.352198603Z"
	        },
	        "Image": "sha256:c444b5e11df9d6bc496256b0e11f4e11deb33a885211ded2c3a4667df55bcb6b",
	        "ResolvConfPath": "/var/lib/docker/containers/bfa8d511275e6666c6e19cf4732b3f642e18ad5c197ea460e951c265ec0a9509/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/bfa8d511275e6666c6e19cf4732b3f642e18ad5c197ea460e951c265ec0a9509/hostname",
	        "HostsPath": "/var/lib/docker/containers/bfa8d511275e6666c6e19cf4732b3f642e18ad5c197ea460e951c265ec0a9509/hosts",
	        "LogPath": "/var/lib/docker/containers/bfa8d511275e6666c6e19cf4732b3f642e18ad5c197ea460e951c265ec0a9509/bfa8d511275e6666c6e19cf4732b3f642e18ad5c197ea460e951c265ec0a9509-json.log",
	        "Name": "/old-k8s-version-094398",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-094398:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-094398",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "bfa8d511275e6666c6e19cf4732b3f642e18ad5c197ea460e951c265ec0a9509",
	                "LowerDir": "/var/lib/docker/overlay2/e5d0aad007a6182b1053636e6612c4db8810b6db3a9158722f140ef4de1ff740-init/diff:/var/lib/docker/overlay2/8c7e7d4b0a6e752d3b033998593f18412fe880cdae3fca36ce4655d5d5dd6a34/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e5d0aad007a6182b1053636e6612c4db8810b6db3a9158722f140ef4de1ff740/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e5d0aad007a6182b1053636e6612c4db8810b6db3a9158722f140ef4de1ff740/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e5d0aad007a6182b1053636e6612c4db8810b6db3a9158722f140ef4de1ff740/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-094398",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-094398/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-094398",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-094398",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-094398",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "8dd9e3ac46221dc7da71bb433435896ac50996510e66f4be65bb45c511ffc16f",
	            "SandboxKey": "/var/run/docker/netns/8dd9e3ac4622",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33443"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33444"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33447"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33445"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33446"
	                    }
	                ]
	            },
	            "Networks": {
	                "old-k8s-version-094398": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "0ba531636d5bcd256e5e0c5cc00963300e9aa97dfe2c1fb4eb178390cd3a90b6",
	                    "EndpointID": "465a187f719232940f3c0decf7840a30e9227b4e721ca2f723bf02d6582378e1",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "ea:35:8a:38:e3:f1",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-094398",
	                        "bfa8d511275e"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
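
The NetworkSettings.Ports map in the inspect dump above is what the harness uses to resolve host-side endpoints for the container; any single mapping can be read back with docker's Go-template syntax. For example, the forwarded SSH port for this container:

	docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' old-k8s-version-094398

which prints 33443, matching the "22/tcp" entry shown above.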
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-094398 -n old-k8s-version-094398
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-094398 -n old-k8s-version-094398: exit status 2 (314.033849ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-094398 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-094398 logs -n 25: (1.015465666s)
helpers_test.go:261: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ stop    │ -p test-preload-805186                                                                                                                                                                                                                        │ test-preload-805186          │ jenkins │ v1.37.0 │ 27 Dec 25 09:34 UTC │ 27 Dec 25 09:34 UTC │
	│ start   │ -p test-preload-805186 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio                                                                                                                            │ test-preload-805186          │ jenkins │ v1.37.0 │ 27 Dec 25 09:34 UTC │ 27 Dec 25 09:35 UTC │
	│ start   │ -p cert-expiration-237269 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-237269       │ jenkins │ v1.37.0 │ 27 Dec 25 09:35 UTC │ 27 Dec 25 09:35 UTC │
	│ delete  │ -p cert-expiration-237269                                                                                                                                                                                                                     │ cert-expiration-237269       │ jenkins │ v1.37.0 │ 27 Dec 25 09:35 UTC │                     │
	│ start   │ -p embed-certs-912564 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                        │ embed-certs-912564           │ jenkins │ v1.37.0 │ 27 Dec 25 09:35 UTC │ 27 Dec 25 09:35 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-094398 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-094398       │ jenkins │ v1.37.0 │ 27 Dec 25 09:35 UTC │                     │
	│ stop    │ -p old-k8s-version-094398 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-094398       │ jenkins │ v1.37.0 │ 27 Dec 25 09:35 UTC │ 27 Dec 25 09:35 UTC │
	│ image   │ test-preload-805186 image list                                                                                                                                                                                                                │ test-preload-805186          │ jenkins │ v1.37.0 │ 27 Dec 25 09:35 UTC │ 27 Dec 25 09:35 UTC │
	│ delete  │ -p test-preload-805186                                                                                                                                                                                                                        │ test-preload-805186          │ jenkins │ v1.37.0 │ 27 Dec 25 09:35 UTC │ 27 Dec 25 09:35 UTC │
	│ delete  │ -p disable-driver-mounts-917808                                                                                                                                                                                                               │ disable-driver-mounts-917808 │ jenkins │ v1.37.0 │ 27 Dec 25 09:35 UTC │ 27 Dec 25 09:35 UTC │
	│ start   │ -p no-preload-963457 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                       │ no-preload-963457            │ jenkins │ v1.37.0 │ 27 Dec 25 09:35 UTC │ 27 Dec 25 09:36 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-094398 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-094398       │ jenkins │ v1.37.0 │ 27 Dec 25 09:35 UTC │ 27 Dec 25 09:35 UTC │
	│ start   │ -p old-k8s-version-094398 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-094398       │ jenkins │ v1.37.0 │ 27 Dec 25 09:35 UTC │ 27 Dec 25 09:36 UTC │
	│ delete  │ -p stopped-upgrade-196124                                                                                                                                                                                                                     │ stopped-upgrade-196124       │ jenkins │ v1.37.0 │ 27 Dec 25 09:35 UTC │ 27 Dec 25 09:35 UTC │
	│ start   │ -p default-k8s-diff-port-497722 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-497722 │ jenkins │ v1.37.0 │ 27 Dec 25 09:35 UTC │ 27 Dec 25 09:36 UTC │
	│ addons  │ enable metrics-server -p embed-certs-912564 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-912564           │ jenkins │ v1.37.0 │ 27 Dec 25 09:35 UTC │                     │
	│ stop    │ -p embed-certs-912564 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-912564           │ jenkins │ v1.37.0 │ 27 Dec 25 09:35 UTC │ 27 Dec 25 09:36 UTC │
	│ addons  │ enable dashboard -p embed-certs-912564 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-912564           │ jenkins │ v1.37.0 │ 27 Dec 25 09:36 UTC │ 27 Dec 25 09:36 UTC │
	│ start   │ -p embed-certs-912564 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                        │ embed-certs-912564           │ jenkins │ v1.37.0 │ 27 Dec 25 09:36 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-963457 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-963457            │ jenkins │ v1.37.0 │ 27 Dec 25 09:36 UTC │                     │
	│ stop    │ -p no-preload-963457 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-963457            │ jenkins │ v1.37.0 │ 27 Dec 25 09:36 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-497722 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-497722 │ jenkins │ v1.37.0 │ 27 Dec 25 09:36 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-497722 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-497722 │ jenkins │ v1.37.0 │ 27 Dec 25 09:36 UTC │                     │
	│ image   │ old-k8s-version-094398 image list --format=json                                                                                                                                                                                               │ old-k8s-version-094398       │ jenkins │ v1.37.0 │ 27 Dec 25 09:36 UTC │ 27 Dec 25 09:36 UTC │
	│ pause   │ -p old-k8s-version-094398 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-094398       │ jenkins │ v1.37.0 │ 27 Dec 25 09:36 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/27 09:36:15
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1227 09:36:15.755856  622335 out.go:360] Setting OutFile to fd 1 ...
	I1227 09:36:15.755997  622335 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:36:15.756005  622335 out.go:374] Setting ErrFile to fd 2...
	I1227 09:36:15.756012  622335 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:36:15.756228  622335 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22343-373581/.minikube/bin
	I1227 09:36:15.756685  622335 out.go:368] Setting JSON to false
	I1227 09:36:15.758150  622335 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4720,"bootTime":1766823456,"procs":363,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1227 09:36:15.758213  622335 start.go:143] virtualization: kvm guest
	I1227 09:36:15.759939  622335 out.go:179] * [embed-certs-912564] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1227 09:36:15.761016  622335 out.go:179]   - MINIKUBE_LOCATION=22343
	I1227 09:36:15.761014  622335 notify.go:221] Checking for updates...
	I1227 09:36:15.763382  622335 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 09:36:15.764638  622335 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22343-373581/kubeconfig
	I1227 09:36:15.765807  622335 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22343-373581/.minikube
	I1227 09:36:15.766905  622335 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1227 09:36:15.767909  622335 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1227 09:36:15.769291  622335 config.go:182] Loaded profile config "embed-certs-912564": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:36:15.769895  622335 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 09:36:15.793686  622335 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1227 09:36:15.793853  622335 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 09:36:15.849675  622335 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-27 09:36:15.839729427 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1227 09:36:15.849769  622335 docker.go:319] overlay module found
	I1227 09:36:15.851438  622335 out.go:179] * Using the docker driver based on existing profile
	I1227 09:36:15.852555  622335 start.go:309] selected driver: docker
	I1227 09:36:15.852572  622335 start.go:928] validating driver "docker" against &{Name:embed-certs-912564 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:embed-certs-912564 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 09:36:15.852663  622335 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 09:36:15.853278  622335 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 09:36:15.905518  622335 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-27 09:36:15.896501582 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1227 09:36:15.905807  622335 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 09:36:15.905858  622335 cni.go:84] Creating CNI manager for ""
	I1227 09:36:15.905926  622335 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 09:36:15.905973  622335 start.go:353] cluster config:
	{Name:embed-certs-912564 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:embed-certs-912564 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 09:36:15.907451  622335 out.go:179] * Starting "embed-certs-912564" primary control-plane node in "embed-certs-912564" cluster
	I1227 09:36:15.908326  622335 cache.go:134] Beginning downloading kic base image for docker with crio
	I1227 09:36:15.909241  622335 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
	I1227 09:36:15.910102  622335 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 09:36:15.910131  622335 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22343-373581/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4
	I1227 09:36:15.910156  622335 cache.go:65] Caching tarball of preloaded images
	I1227 09:36:15.910205  622335 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1227 09:36:15.910262  622335 preload.go:251] Found /home/jenkins/minikube-integration/22343-373581/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1227 09:36:15.910273  622335 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I1227 09:36:15.910379  622335 profile.go:143] Saving config to /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/embed-certs-912564/config.json ...
	I1227 09:36:15.929803  622335 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon, skipping pull
	I1227 09:36:15.929822  622335 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in daemon, skipping load
	I1227 09:36:15.929849  622335 cache.go:243] Successfully downloaded all kic artifacts
	I1227 09:36:15.929884  622335 start.go:360] acquireMachinesLock for embed-certs-912564: {Name:mk61b0f1dd44336f66b7ae60f44b102943279f72 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 09:36:15.929937  622335 start.go:364] duration metric: took 35.4µs to acquireMachinesLock for "embed-certs-912564"
	I1227 09:36:15.929953  622335 start.go:96] Skipping create...Using existing machine configuration
	I1227 09:36:15.929958  622335 fix.go:54] fixHost starting: 
	I1227 09:36:15.930186  622335 cli_runner.go:164] Run: docker container inspect embed-certs-912564 --format={{.State.Status}}
	I1227 09:36:15.946167  622335 fix.go:112] recreateIfNeeded on embed-certs-912564: state=Stopped err=<nil>
	W1227 09:36:15.946200  622335 fix.go:138] unexpected machine state, will restart: <nil>
	W1227 09:36:14.163271  616179 node_ready.go:57] node "default-k8s-diff-port-497722" has "Ready":"False" status (will retry)
	W1227 09:36:16.163948  616179 node_ready.go:57] node "default-k8s-diff-port-497722" has "Ready":"False" status (will retry)
	W1227 09:36:13.261954  613189 pod_ready.go:104] pod "coredns-5dd5756b68-l2f7v" is not "Ready", error: <nil>
	W1227 09:36:15.262540  613189 pod_ready.go:104] pod "coredns-5dd5756b68-l2f7v" is not "Ready", error: <nil>
	W1227 09:36:17.761765  613189 pod_ready.go:104] pod "coredns-5dd5756b68-l2f7v" is not "Ready", error: <nil>
	I1227 09:36:17.207000  610436 node_ready.go:49] node "no-preload-963457" is "Ready"
	I1227 09:36:17.207025  610436 node_ready.go:38] duration metric: took 13.502511991s for node "no-preload-963457" to be "Ready" ...
	I1227 09:36:17.207039  610436 api_server.go:52] waiting for apiserver process to appear ...
	I1227 09:36:17.207085  610436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 09:36:17.219077  610436 api_server.go:72] duration metric: took 13.880363312s to wait for apiserver process to appear ...
	I1227 09:36:17.219099  610436 api_server.go:88] waiting for apiserver healthz status ...
	I1227 09:36:17.219117  610436 api_server.go:299] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1227 09:36:17.224033  610436 api_server.go:325] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1227 09:36:17.225019  610436 api_server.go:141] control plane version: v1.35.0
	I1227 09:36:17.225043  610436 api_server.go:131] duration metric: took 5.936968ms to wait for apiserver health ...
	I1227 09:36:17.225053  610436 system_pods.go:43] waiting for kube-system pods to appear ...
	I1227 09:36:17.227917  610436 system_pods.go:59] 8 kube-system pods found
	I1227 09:36:17.227951  610436 system_pods.go:61] "coredns-7d764666f9-wnzhx" [2152780b-b980-4e4a-b652-9cd0ec857a2a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 09:36:17.227958  610436 system_pods.go:61] "etcd-no-preload-963457" [ecf5237d-e6e0-4cee-977a-e2356ec1db8f] Running
	I1227 09:36:17.227966  610436 system_pods.go:61] "kindnet-7kw8b" [36ed3b47-67e2-483b-9563-f3366df2f0c5] Running
	I1227 09:36:17.227970  610436 system_pods.go:61] "kube-apiserver-no-preload-963457" [95e8b881-2ba5-464a-951b-8cfe1d65f33b] Running
	I1227 09:36:17.227978  610436 system_pods.go:61] "kube-controller-manager-no-preload-963457" [2f715235-2f6e-414e-911b-d6ee55e67a06] Running
	I1227 09:36:17.227987  610436 system_pods.go:61] "kube-proxy-grkqs" [6f0a5a48-9159-478a-8949-827103a7c85c] Running
	I1227 09:36:17.227997  610436 system_pods.go:61] "kube-scheduler-no-preload-963457" [f5b9da44-e7d5-4901-b9c6-2315b873d53a] Running
	I1227 09:36:17.228004  610436 system_pods.go:61] "storage-provisioner" [3de40d14-cc88-44f8-a071-caea798ff465] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1227 09:36:17.228015  610436 system_pods.go:74] duration metric: took 2.954672ms to wait for pod list to return data ...
	I1227 09:36:17.228026  610436 default_sa.go:34] waiting for default service account to be created ...
	I1227 09:36:17.230166  610436 default_sa.go:45] found service account: "default"
	I1227 09:36:17.230187  610436 default_sa.go:55] duration metric: took 2.152948ms for default service account to be created ...
	I1227 09:36:17.230195  610436 system_pods.go:116] waiting for k8s-apps to be running ...
	I1227 09:36:17.232590  610436 system_pods.go:86] 8 kube-system pods found
	I1227 09:36:17.232614  610436 system_pods.go:89] "coredns-7d764666f9-wnzhx" [2152780b-b980-4e4a-b652-9cd0ec857a2a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 09:36:17.232621  610436 system_pods.go:89] "etcd-no-preload-963457" [ecf5237d-e6e0-4cee-977a-e2356ec1db8f] Running
	I1227 09:36:17.232626  610436 system_pods.go:89] "kindnet-7kw8b" [36ed3b47-67e2-483b-9563-f3366df2f0c5] Running
	I1227 09:36:17.232629  610436 system_pods.go:89] "kube-apiserver-no-preload-963457" [95e8b881-2ba5-464a-951b-8cfe1d65f33b] Running
	I1227 09:36:17.232633  610436 system_pods.go:89] "kube-controller-manager-no-preload-963457" [2f715235-2f6e-414e-911b-d6ee55e67a06] Running
	I1227 09:36:17.232636  610436 system_pods.go:89] "kube-proxy-grkqs" [6f0a5a48-9159-478a-8949-827103a7c85c] Running
	I1227 09:36:17.232639  610436 system_pods.go:89] "kube-scheduler-no-preload-963457" [f5b9da44-e7d5-4901-b9c6-2315b873d53a] Running
	I1227 09:36:17.232647  610436 system_pods.go:89] "storage-provisioner" [3de40d14-cc88-44f8-a071-caea798ff465] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1227 09:36:17.232678  610436 retry.go:84] will retry after 300ms: missing components: kube-dns
	I1227 09:36:17.541732  610436 system_pods.go:86] 8 kube-system pods found
	I1227 09:36:17.541764  610436 system_pods.go:89] "coredns-7d764666f9-wnzhx" [2152780b-b980-4e4a-b652-9cd0ec857a2a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 09:36:17.541770  610436 system_pods.go:89] "etcd-no-preload-963457" [ecf5237d-e6e0-4cee-977a-e2356ec1db8f] Running
	I1227 09:36:17.541776  610436 system_pods.go:89] "kindnet-7kw8b" [36ed3b47-67e2-483b-9563-f3366df2f0c5] Running
	I1227 09:36:17.541780  610436 system_pods.go:89] "kube-apiserver-no-preload-963457" [95e8b881-2ba5-464a-951b-8cfe1d65f33b] Running
	I1227 09:36:17.541785  610436 system_pods.go:89] "kube-controller-manager-no-preload-963457" [2f715235-2f6e-414e-911b-d6ee55e67a06] Running
	I1227 09:36:17.541815  610436 system_pods.go:89] "kube-proxy-grkqs" [6f0a5a48-9159-478a-8949-827103a7c85c] Running
	I1227 09:36:17.541822  610436 system_pods.go:89] "kube-scheduler-no-preload-963457" [f5b9da44-e7d5-4901-b9c6-2315b873d53a] Running
	I1227 09:36:17.541831  610436 system_pods.go:89] "storage-provisioner" [3de40d14-cc88-44f8-a071-caea798ff465] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1227 09:36:17.912221  610436 system_pods.go:86] 8 kube-system pods found
	I1227 09:36:17.912247  610436 system_pods.go:89] "coredns-7d764666f9-wnzhx" [2152780b-b980-4e4a-b652-9cd0ec857a2a] Running
	I1227 09:36:17.912252  610436 system_pods.go:89] "etcd-no-preload-963457" [ecf5237d-e6e0-4cee-977a-e2356ec1db8f] Running
	I1227 09:36:17.912255  610436 system_pods.go:89] "kindnet-7kw8b" [36ed3b47-67e2-483b-9563-f3366df2f0c5] Running
	I1227 09:36:17.912259  610436 system_pods.go:89] "kube-apiserver-no-preload-963457" [95e8b881-2ba5-464a-951b-8cfe1d65f33b] Running
	I1227 09:36:17.912262  610436 system_pods.go:89] "kube-controller-manager-no-preload-963457" [2f715235-2f6e-414e-911b-d6ee55e67a06] Running
	I1227 09:36:17.912265  610436 system_pods.go:89] "kube-proxy-grkqs" [6f0a5a48-9159-478a-8949-827103a7c85c] Running
	I1227 09:36:17.912269  610436 system_pods.go:89] "kube-scheduler-no-preload-963457" [f5b9da44-e7d5-4901-b9c6-2315b873d53a] Running
	I1227 09:36:17.912272  610436 system_pods.go:89] "storage-provisioner" [3de40d14-cc88-44f8-a071-caea798ff465] Running
	I1227 09:36:17.912279  610436 system_pods.go:126] duration metric: took 682.077772ms to wait for k8s-apps to be running ...
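
The "will retry after 300ms: missing components: kube-dns" line above reflects a poll-and-retry pattern: list the kube-system pods, and if a required component is not yet Running, sleep and list again. A hedged Go sketch of that pattern, where checkPods is an assumed stand-in for the API pod listing done in the log:

    package sketch

    import (
        "fmt"
        "time"
    )

    // waitForComponents retries checkPods until no required component is
    // missing. The 300ms initial delay matches retry.go above; the doubling
    // backoff is an assumption of this sketch, not necessarily minikube's.
    func waitForComponents(checkPods func() []string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        delay := 300 * time.Millisecond
        for {
            missing := checkPods()
            if len(missing) == 0 {
                return nil
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("missing components: %v", missing)
            }
            time.Sleep(delay)
            delay *= 2
        }
    }
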
	I1227 09:36:17.912286  610436 system_svc.go:44] waiting for kubelet service to be running ....
	I1227 09:36:17.912328  610436 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 09:36:17.925757  610436 system_svc.go:56] duration metric: took 13.459067ms WaitForService to wait for kubelet
	I1227 09:36:17.925808  610436 kubeadm.go:587] duration metric: took 14.587094691s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 09:36:17.925832  610436 node_conditions.go:102] verifying NodePressure condition ...
	I1227 09:36:17.928354  610436 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1227 09:36:17.928377  610436 node_conditions.go:123] node cpu capacity is 8
	I1227 09:36:17.928390  610436 node_conditions.go:105] duration metric: took 2.552518ms to run NodePressure ...
	I1227 09:36:17.928402  610436 start.go:242] waiting for startup goroutines ...
	I1227 09:36:17.928411  610436 start.go:247] waiting for cluster config update ...
	I1227 09:36:17.928428  610436 start.go:256] writing updated cluster config ...
	I1227 09:36:17.928688  610436 ssh_runner.go:195] Run: rm -f paused
	I1227 09:36:17.932505  610436 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1227 09:36:18.012128  610436 pod_ready.go:83] waiting for pod "coredns-7d764666f9-wnzhx" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:36:18.016092  610436 pod_ready.go:94] pod "coredns-7d764666f9-wnzhx" is "Ready"
	I1227 09:36:18.016113  610436 pod_ready.go:86] duration metric: took 3.954033ms for pod "coredns-7d764666f9-wnzhx" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:36:18.018039  610436 pod_ready.go:83] waiting for pod "etcd-no-preload-963457" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:36:18.021308  610436 pod_ready.go:94] pod "etcd-no-preload-963457" is "Ready"
	I1227 09:36:18.021328  610436 pod_ready.go:86] duration metric: took 3.271462ms for pod "etcd-no-preload-963457" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:36:18.022843  610436 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-963457" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:36:18.025875  610436 pod_ready.go:94] pod "kube-apiserver-no-preload-963457" is "Ready"
	I1227 09:36:18.025892  610436 pod_ready.go:86] duration metric: took 3.027767ms for pod "kube-apiserver-no-preload-963457" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:36:18.027544  610436 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-963457" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:36:18.336856  610436 pod_ready.go:94] pod "kube-controller-manager-no-preload-963457" is "Ready"
	I1227 09:36:18.336887  610436 pod_ready.go:86] duration metric: took 309.32474ms for pod "kube-controller-manager-no-preload-963457" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:36:18.537212  610436 pod_ready.go:83] waiting for pod "kube-proxy-grkqs" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:36:18.936288  610436 pod_ready.go:94] pod "kube-proxy-grkqs" is "Ready"
	I1227 09:36:18.936315  610436 pod_ready.go:86] duration metric: took 399.078348ms for pod "kube-proxy-grkqs" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:36:19.137254  610436 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-963457" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:36:19.536512  610436 pod_ready.go:94] pod "kube-scheduler-no-preload-963457" is "Ready"
	I1227 09:36:19.536545  610436 pod_ready.go:86] duration metric: took 399.259363ms for pod "kube-scheduler-no-preload-963457" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:36:19.536571  610436 pod_ready.go:40] duration metric: took 1.604026487s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1227 09:36:19.582579  610436 start.go:625] kubectl: 1.35.0, cluster: 1.35.0 (minor skew: 0)
	I1227 09:36:19.584481  610436 out.go:179] * Done! kubectl is now configured to use "no-preload-963457" cluster and "default" namespace by default
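
The pod_ready checks above test each pod's Ready condition in turn. A hypothetical client-go sketch of the same check (an illustration, not minikube's code); the kubeconfig path and pod name are taken from this run's logs:

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether the pod's PodReady condition is True.
    func podReady(cs *kubernetes.Clientset, ns, name string) (bool, error) {
        pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("",
            "/home/jenkins/minikube-integration/22343-373581/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        ok, err := podReady(cs, "kube-system", "coredns-7d764666f9-wnzhx")
        fmt.Println(ok, err)
    }
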
	I1227 09:36:15.947788  622335 out.go:252] * Restarting existing docker container for "embed-certs-912564" ...
	I1227 09:36:15.947868  622335 cli_runner.go:164] Run: docker start embed-certs-912564
	I1227 09:36:16.186477  622335 cli_runner.go:164] Run: docker container inspect embed-certs-912564 --format={{.State.Status}}
	I1227 09:36:16.204808  622335 kic.go:430] container "embed-certs-912564" state is running.
	I1227 09:36:16.205231  622335 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-912564
	I1227 09:36:16.224487  622335 profile.go:143] Saving config to /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/embed-certs-912564/config.json ...
	I1227 09:36:16.224742  622335 machine.go:94] provisionDockerMachine start ...
	I1227 09:36:16.224849  622335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-912564
	I1227 09:36:16.243201  622335 main.go:144] libmachine: Using SSH client type: native
	I1227 09:36:16.243427  622335 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33453 <nil> <nil>}
	I1227 09:36:16.243440  622335 main.go:144] libmachine: About to run SSH command:
	hostname
	I1227 09:36:16.244129  622335 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:57058->127.0.0.1:33453: read: connection reset by peer
	I1227 09:36:19.367696  622335 main.go:144] libmachine: SSH cmd err, output: <nil>: embed-certs-912564
	
	I1227 09:36:19.367723  622335 ubuntu.go:182] provisioning hostname "embed-certs-912564"
	I1227 09:36:19.367814  622335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-912564
	I1227 09:36:19.386757  622335 main.go:144] libmachine: Using SSH client type: native
	I1227 09:36:19.387127  622335 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33453 <nil> <nil>}
	I1227 09:36:19.387150  622335 main.go:144] libmachine: About to run SSH command:
	sudo hostname embed-certs-912564 && echo "embed-certs-912564" | sudo tee /etc/hostname
	I1227 09:36:19.522771  622335 main.go:144] libmachine: SSH cmd err, output: <nil>: embed-certs-912564
	
	I1227 09:36:19.522877  622335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-912564
	I1227 09:36:19.543038  622335 main.go:144] libmachine: Using SSH client type: native
	I1227 09:36:19.543358  622335 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33453 <nil> <nil>}
	I1227 09:36:19.543388  622335 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-912564' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-912564/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-912564' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1227 09:36:19.668353  622335 main.go:144] libmachine: SSH cmd err, output: <nil>: 
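
The hostname provisioning above runs shell snippets on the node over SSH. A minimal sketch using golang.org/x/crypto/ssh (not minikube's libmachine code); address, user, and key path are the ones logged for this run, and host-key checking is skipped only to keep the sketch short:

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    // runRemote executes cmd on addr as user, authenticating with the
    // private key at keyPath, and returns the combined output.
    func runRemote(addr, user, keyPath, cmd string) (string, error) {
        key, err := os.ReadFile(keyPath)
        if err != nil {
            return "", err
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            return "", err
        }
        cfg := &ssh.ClientConfig{
            User: user,
            Auth: []ssh.AuthMethod{ssh.PublicKeys(signer)},
            // Assumption of this sketch only: do not verify the host key.
            HostKeyCallback: ssh.InsecureIgnoreHostKey(),
        }
        client, err := ssh.Dial("tcp", addr, cfg)
        if err != nil {
            return "", err
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            return "", err
        }
        defer sess.Close()
        out, err := sess.CombinedOutput(cmd)
        return string(out), err
    }

    func main() {
        out, err := runRemote("127.0.0.1:33453", "docker",
            "/home/jenkins/minikube-integration/22343-373581/.minikube/machines/embed-certs-912564/id_rsa",
            `sudo hostname embed-certs-912564 && echo "embed-certs-912564" | sudo tee /etc/hostname`)
        fmt.Println(out, err)
    }
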
	I1227 09:36:19.668380  622335 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22343-373581/.minikube CaCertPath:/home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22343-373581/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22343-373581/.minikube}
	I1227 09:36:19.668427  622335 ubuntu.go:190] setting up certificates
	I1227 09:36:19.668447  622335 provision.go:84] configureAuth start
	I1227 09:36:19.668529  622335 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-912564
	I1227 09:36:19.689166  622335 provision.go:143] copyHostCerts
	I1227 09:36:19.689233  622335 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-373581/.minikube/key.pem, removing ...
	I1227 09:36:19.689256  622335 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-373581/.minikube/key.pem
	I1227 09:36:19.689339  622335 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22343-373581/.minikube/key.pem (1679 bytes)
	I1227 09:36:19.689483  622335 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-373581/.minikube/ca.pem, removing ...
	I1227 09:36:19.689499  622335 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-373581/.minikube/ca.pem
	I1227 09:36:19.689545  622335 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22343-373581/.minikube/ca.pem (1082 bytes)
	I1227 09:36:19.689664  622335 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-373581/.minikube/cert.pem, removing ...
	I1227 09:36:19.689673  622335 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-373581/.minikube/cert.pem
	I1227 09:36:19.689711  622335 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22343-373581/.minikube/cert.pem (1123 bytes)
	I1227 09:36:19.689881  622335 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22343-373581/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca-key.pem org=jenkins.embed-certs-912564 san=[127.0.0.1 192.168.94.2 embed-certs-912564 localhost minikube]
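
A hedged crypto/x509 sketch of generating such a server certificate, using the SANs and the 26280h CertExpiration from this run's config; loading the CA pair and writing the private key are elided, and this is not minikube's shipped implementation:

    package sketch

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    // writeServerCert signs a server certificate with the given CA and
    // writes it to server.pem. caCert/caKey loading is left out.
    func writeServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey) error {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return err
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(time.Now().UnixNano()),
            Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-912564"}},
            // SANs as logged: DNS names plus the loopback and node IPs.
            DNSNames:    []string{"embed-certs-912564", "localhost", "minikube"},
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.94.2")},
            NotBefore:   time.Now(),
            NotAfter:    time.Now().Add(26280 * time.Hour), // CertExpiration above
            KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
        if err != nil {
            return err
        }
        f, err := os.Create("server.pem")
        if err != nil {
            return err
        }
        defer f.Close()
        return pem.Encode(f, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
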
	I1227 09:36:19.746663  622335 provision.go:177] copyRemoteCerts
	I1227 09:36:19.746730  622335 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1227 09:36:19.746782  622335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-912564
	I1227 09:36:19.766272  622335 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/embed-certs-912564/id_rsa Username:docker}
	I1227 09:36:19.858141  622335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1227 09:36:19.876224  622335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1227 09:36:19.894481  622335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1227 09:36:19.911686  622335 provision.go:87] duration metric: took 243.216642ms to configureAuth
	I1227 09:36:19.911711  622335 ubuntu.go:206] setting minikube options for container-runtime
	I1227 09:36:19.911915  622335 config.go:182] Loaded profile config "embed-certs-912564": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:36:19.912029  622335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-912564
	I1227 09:36:19.930663  622335 main.go:144] libmachine: Using SSH client type: native
	I1227 09:36:19.930962  622335 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33453 <nil> <nil>}
	I1227 09:36:19.930983  622335 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1227 09:36:20.251003  622335 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1227 09:36:20.251031  622335 machine.go:97] duration metric: took 4.026272116s to provisionDockerMachine
	I1227 09:36:20.251046  622335 start.go:293] postStartSetup for "embed-certs-912564" (driver="docker")
	I1227 09:36:20.251060  622335 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1227 09:36:20.251125  622335 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1227 09:36:20.251200  622335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-912564
	I1227 09:36:20.272340  622335 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/embed-certs-912564/id_rsa Username:docker}
	I1227 09:36:20.363700  622335 ssh_runner.go:195] Run: cat /etc/os-release
	I1227 09:36:20.367711  622335 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1227 09:36:20.367734  622335 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1227 09:36:20.367749  622335 filesync.go:126] Scanning /home/jenkins/minikube-integration/22343-373581/.minikube/addons for local assets ...
	I1227 09:36:20.367820  622335 filesync.go:126] Scanning /home/jenkins/minikube-integration/22343-373581/.minikube/files for local assets ...
	I1227 09:36:20.367922  622335 filesync.go:149] local asset: /home/jenkins/minikube-integration/22343-373581/.minikube/files/etc/ssl/certs/3771712.pem -> 3771712.pem in /etc/ssl/certs
	I1227 09:36:20.368051  622335 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1227 09:36:20.376361  622335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/files/etc/ssl/certs/3771712.pem --> /etc/ssl/certs/3771712.pem (1708 bytes)
	I1227 09:36:20.393895  622335 start.go:296] duration metric: took 142.830385ms for postStartSetup
	I1227 09:36:20.393981  622335 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 09:36:20.394046  622335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-912564
	I1227 09:36:20.412636  622335 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/embed-certs-912564/id_rsa Username:docker}
	I1227 09:36:20.501303  622335 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1227 09:36:20.506127  622335 fix.go:56] duration metric: took 4.576160597s for fixHost
	I1227 09:36:20.506154  622335 start.go:83] releasing machines lock for "embed-certs-912564", held for 4.576205681s
	I1227 09:36:20.506231  622335 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-912564
	I1227 09:36:20.526289  622335 ssh_runner.go:195] Run: cat /version.json
	I1227 09:36:20.526337  622335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-912564
	I1227 09:36:20.526345  622335 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1227 09:36:20.526445  622335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-912564
	I1227 09:36:20.546473  622335 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/embed-certs-912564/id_rsa Username:docker}
	I1227 09:36:20.546990  622335 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/embed-certs-912564/id_rsa Username:docker}
	I1227 09:36:20.635010  622335 ssh_runner.go:195] Run: systemctl --version
	I1227 09:36:20.692254  622335 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1227 09:36:20.729042  622335 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1227 09:36:20.734159  622335 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1227 09:36:20.734289  622335 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1227 09:36:20.742588  622335 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1227 09:36:20.742612  622335 start.go:496] detecting cgroup driver to use...
	I1227 09:36:20.742656  622335 detect.go:190] detected "systemd" cgroup driver on host os
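
One common way to make this detection, shown as a sketch under that assumption (not necessarily detect.go's exact logic): on a cgroup v2 host /sys/fs/cgroup/cgroup.controllers exists, and a systemd-managed host is then taken to use the "systemd" cgroup driver.

    package sketch

    import "os"

    // detectCgroupDriver guesses the cgroup driver: cgroup v2 hosts expose
    // /sys/fs/cgroup/cgroup.controllers; otherwise fall back to cgroupfs.
    func detectCgroupDriver() string {
        if _, err := os.Stat("/sys/fs/cgroup/cgroup.controllers"); err == nil {
            return "systemd"
        }
        return "cgroupfs"
    }
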
	I1227 09:36:20.742708  622335 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1227 09:36:20.757772  622335 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 09:36:20.771033  622335 docker.go:218] disabling cri-docker service (if available) ...
	I1227 09:36:20.771095  622335 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1227 09:36:20.785978  622335 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1227 09:36:20.799169  622335 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1227 09:36:20.882315  622335 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1227 09:36:20.965183  622335 docker.go:234] disabling docker service ...
	I1227 09:36:20.965254  622335 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1227 09:36:20.980266  622335 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1227 09:36:20.992591  622335 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1227 09:36:21.074160  622335 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1227 09:36:21.160689  622335 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1227 09:36:21.174204  622335 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 09:36:21.188429  622335 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1227 09:36:21.188490  622335 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:36:21.197653  622335 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1227 09:36:21.197706  622335 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:36:21.206508  622335 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:36:21.215288  622335 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:36:21.224635  622335 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1227 09:36:21.232876  622335 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:36:21.241632  622335 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:36:21.250258  622335 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:36:21.259256  622335 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1227 09:36:21.267330  622335 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1227 09:36:21.274844  622335 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 09:36:21.357225  622335 ssh_runner.go:195] Run: sudo systemctl restart crio
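
The sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf in place before crio is restarted. A rough Go equivalent of the pause_image edit (a sketch, not the shipped code):

    package sketch

    import (
        "os"
        "regexp"
    )

    // setPauseImage rewrites the pause_image line in a cri-o drop-in config,
    // mirroring the sed command in the log.
    func setPauseImage(path, image string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
        out := re.ReplaceAll(data, []byte(`pause_image = "`+image+`"`))
        return os.WriteFile(path, out, 0o644)
    }

Called as setPauseImage("/etc/crio/crio.conf.d/02-crio.conf", "registry.k8s.io/pause:3.10.1"), this matches the edit performed above.
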
	I1227 09:36:21.513416  622335 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1227 09:36:21.513491  622335 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1227 09:36:21.517802  622335 start.go:574] Will wait 60s for crictl version
	I1227 09:36:21.517863  622335 ssh_runner.go:195] Run: which crictl
	I1227 09:36:21.521539  622335 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1227 09:36:21.547358  622335 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1227 09:36:21.547444  622335 ssh_runner.go:195] Run: crio --version
	I1227 09:36:21.578207  622335 ssh_runner.go:195] Run: crio --version
	I1227 09:36:21.609292  622335 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.34.3 ...
	W1227 09:36:18.663032  616179 node_ready.go:57] node "default-k8s-diff-port-497722" has "Ready":"False" status (will retry)
	W1227 09:36:20.664243  616179 node_ready.go:57] node "default-k8s-diff-port-497722" has "Ready":"False" status (will retry)
	I1227 09:36:21.610434  622335 cli_runner.go:164] Run: docker network inspect embed-certs-912564 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 09:36:21.628243  622335 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1227 09:36:21.632413  622335 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 09:36:21.642892  622335 kubeadm.go:884] updating cluster {Name:embed-certs-912564 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:embed-certs-912564 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1227 09:36:21.643006  622335 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 09:36:21.643062  622335 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 09:36:21.677448  622335 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 09:36:21.677471  622335 crio.go:433] Images already preloaded, skipping extraction
	I1227 09:36:21.677524  622335 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 09:36:21.703610  622335 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 09:36:21.703636  622335 cache_images.go:86] Images are preloaded, skipping loading
	I1227 09:36:21.703645  622335 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.35.0 crio true true} ...
	I1227 09:36:21.703772  622335 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-912564 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:embed-certs-912564 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1227 09:36:21.703895  622335 ssh_runner.go:195] Run: crio config
	I1227 09:36:21.750305  622335 cni.go:84] Creating CNI manager for ""
	I1227 09:36:21.750333  622335 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 09:36:21.750350  622335 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1227 09:36:21.750373  622335 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-912564 NodeName:embed-certs-912564 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1227 09:36:21.750509  622335 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-912564"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1227 09:36:21.750578  622335 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1227 09:36:21.759704  622335 binaries.go:51] Found k8s binaries, skipping transfer
	I1227 09:36:21.759777  622335 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1227 09:36:21.768072  622335 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1227 09:36:21.781002  622335 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1227 09:36:21.793925  622335 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
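
The kubeadm.yaml.new written here holds the four YAML documents shown above. A small sketch (assuming gopkg.in/yaml.v3) that splits the file on "---" and decodes each document's header; for the config above it should print the v1beta4 InitConfiguration and ClusterConfiguration, the v1beta1 KubeletConfiguration, and the v1alpha1 KubeProxyConfiguration:

    package main

    import (
        "fmt"
        "os"
        "strings"

        "gopkg.in/yaml.v3"
    )

    func main() {
        data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
        if err != nil {
            panic(err)
        }
        for _, doc := range strings.Split(string(data), "\n---\n") {
            var hdr struct {
                APIVersion string `yaml:"apiVersion"`
                Kind       string `yaml:"kind"`
            }
            if err := yaml.Unmarshal([]byte(doc), &hdr); err != nil {
                panic(err)
            }
            fmt.Printf("%s %s\n", hdr.APIVersion, hdr.Kind)
        }
    }
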
	I1227 09:36:21.806305  622335 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1227 09:36:21.809898  622335 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 09:36:21.820032  622335 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 09:36:21.920758  622335 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 09:36:21.960171  622335 certs.go:69] Setting up /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/embed-certs-912564 for IP: 192.168.94.2
	I1227 09:36:21.960196  622335 certs.go:195] generating shared ca certs ...
	I1227 09:36:21.960231  622335 certs.go:227] acquiring lock for ca certs: {Name:mkab52f7e7387ca5a3a8b54d27e85c3698ea95d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:36:21.960474  622335 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22343-373581/.minikube/ca.key
	I1227 09:36:21.960554  622335 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22343-373581/.minikube/proxy-client-ca.key
	I1227 09:36:21.960569  622335 certs.go:257] generating profile certs ...
	I1227 09:36:21.960701  622335 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/embed-certs-912564/client.key
	I1227 09:36:21.960779  622335 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/embed-certs-912564/apiserver.key.6601433b
	I1227 09:36:21.960888  622335 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/embed-certs-912564/proxy-client.key
	I1227 09:36:21.961033  622335 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/377171.pem (1338 bytes)
	W1227 09:36:21.961086  622335 certs.go:480] ignoring /home/jenkins/minikube-integration/22343-373581/.minikube/certs/377171_empty.pem, impossibly tiny 0 bytes
	I1227 09:36:21.961113  622335 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca-key.pem (1679 bytes)
	I1227 09:36:21.961150  622335 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca.pem (1082 bytes)
	I1227 09:36:21.961186  622335 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/cert.pem (1123 bytes)
	I1227 09:36:21.961225  622335 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/key.pem (1679 bytes)
	I1227 09:36:21.961298  622335 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-373581/.minikube/files/etc/ssl/certs/3771712.pem (1708 bytes)
	I1227 09:36:21.962178  622335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1227 09:36:21.985651  622335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1227 09:36:22.006677  622335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1227 09:36:22.029280  622335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1227 09:36:22.054424  622335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/embed-certs-912564/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1227 09:36:22.077264  622335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/embed-certs-912564/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1227 09:36:22.095602  622335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/embed-certs-912564/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1227 09:36:22.113971  622335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/embed-certs-912564/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1227 09:36:22.131748  622335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/certs/377171.pem --> /usr/share/ca-certificates/377171.pem (1338 bytes)
	I1227 09:36:22.149344  622335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/files/etc/ssl/certs/3771712.pem --> /usr/share/ca-certificates/3771712.pem (1708 bytes)
	I1227 09:36:22.167734  622335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1227 09:36:22.187888  622335 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1227 09:36:22.201027  622335 ssh_runner.go:195] Run: openssl version
	I1227 09:36:22.207221  622335 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:36:22.214467  622335 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1227 09:36:22.221999  622335 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:36:22.226212  622335 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 09:06 /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:36:22.226259  622335 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:36:22.269710  622335 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1227 09:36:22.277804  622335 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/377171.pem
	I1227 09:36:22.285678  622335 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/377171.pem /etc/ssl/certs/377171.pem
	I1227 09:36:22.293081  622335 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/377171.pem
	I1227 09:36:22.297054  622335 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 27 09:09 /usr/share/ca-certificates/377171.pem
	I1227 09:36:22.297104  622335 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/377171.pem
	I1227 09:36:22.331452  622335 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1227 09:36:22.339171  622335 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3771712.pem
	I1227 09:36:22.347116  622335 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3771712.pem /etc/ssl/certs/3771712.pem
	I1227 09:36:22.354513  622335 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3771712.pem
	I1227 09:36:22.358217  622335 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 27 09:09 /usr/share/ca-certificates/3771712.pem
	I1227 09:36:22.358268  622335 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3771712.pem
	I1227 09:36:22.394066  622335 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
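
Each round above installs a PEM under /usr/share/ca-certificates and symlinks /etc/ssl/certs/<openssl subject hash>.0 to it, which is how OpenSSL-based clients locate trusted CAs. A hedged sketch of that sequence, shelling out to openssl as the log does:

    package sketch

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // trustCert computes the subject hash of a PEM certificate and creates
    // the /etc/ssl/certs/<hash>.0 symlink pointing at it.
    func trustCert(pemPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA.pem above
        link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
        return exec.Command("sudo", "ln", "-fs", pemPath, link).Run()
    }
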
	I1227 09:36:22.402772  622335 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1227 09:36:22.406779  622335 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1227 09:36:22.443933  622335 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1227 09:36:22.482195  622335 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1227 09:36:22.529477  622335 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1227 09:36:22.575213  622335 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1227 09:36:22.634783  622335 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
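
The -checkend 86400 calls above assert that each certificate remains valid for at least another day. An in-process Go equivalent (a sketch, not what minikube actually runs):

    package sketch

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // checkEnd fails if the certificate at path expires within the given
    // window, mirroring `openssl x509 -checkend 86400`.
    func checkEnd(path string, within time.Duration) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return fmt.Errorf("%s: no PEM block", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return err
        }
        if time.Now().Add(within).After(cert.NotAfter) {
            return fmt.Errorf("%s expires at %s", path, cert.NotAfter)
        }
        return nil
    }
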
	I1227 09:36:22.671980  622335 kubeadm.go:401] StartCluster: {Name:embed-certs-912564 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:embed-certs-912564 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 09:36:22.672057  622335 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 09:36:22.672116  622335 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 09:36:22.705162  622335 cri.go:96] found id: "5383d4cdce95af97f9b9e8e07db61c856f19c8db586c179d8ff736a43046829e"
	I1227 09:36:22.705186  622335 cri.go:96] found id: "4073c03ac98fe56856e504a1aa0d5a1748d26e6ce500dc31ad8e91ee49384cd6"
	I1227 09:36:22.705192  622335 cri.go:96] found id: "ba83fd494a8c5a1bc7eb22555934e2b74494963aa284a3786fa73f76c60a9175"
	I1227 09:36:22.705196  622335 cri.go:96] found id: "663c76b88f42532f7c763b6916bdc80252b590b27aa690c8fe09d547aca1eb6c"
	I1227 09:36:22.705205  622335 cri.go:96] found id: ""
	I1227 09:36:22.705250  622335 ssh_runner.go:195] Run: sudo runc list -f json
	W1227 09:36:22.718695  622335 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T09:36:22Z" level=error msg="open /run/runc: no such file or directory"
	I1227 09:36:22.718785  622335 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1227 09:36:22.726975  622335 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1227 09:36:22.726995  622335 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1227 09:36:22.727046  622335 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1227 09:36:22.736032  622335 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1227 09:36:22.737138  622335 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-912564" does not appear in /home/jenkins/minikube-integration/22343-373581/kubeconfig
	I1227 09:36:22.737771  622335 kubeconfig.go:62] /home/jenkins/minikube-integration/22343-373581/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-912564" cluster setting kubeconfig missing "embed-certs-912564" context setting]
	I1227 09:36:22.738693  622335 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-373581/kubeconfig: {Name:mkb411689a0952d3f8325ae3ededdb7352d2980c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:36:22.740844  622335 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1227 09:36:22.750818  622335 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.94.2
	I1227 09:36:22.750853  622335 kubeadm.go:602] duration metric: took 23.85154ms to restartPrimaryControlPlane
	I1227 09:36:22.750864  622335 kubeadm.go:403] duration metric: took 78.893214ms to StartCluster
	I1227 09:36:22.750883  622335 settings.go:142] acquiring lock: {Name:mkc867fd794159ff1847b43b60548e454c403aa9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:36:22.750952  622335 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22343-373581/kubeconfig
	I1227 09:36:22.753086  622335 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-373581/kubeconfig: {Name:mkb411689a0952d3f8325ae3ededdb7352d2980c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:36:22.753360  622335 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1227 09:36:22.753437  622335 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1227 09:36:22.753532  622335 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-912564"
	I1227 09:36:22.753555  622335 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-912564"
	I1227 09:36:22.753556  622335 addons.go:70] Setting dashboard=true in profile "embed-certs-912564"
	W1227 09:36:22.753563  622335 addons.go:248] addon storage-provisioner should already be in state true
	I1227 09:36:22.753573  622335 addons.go:239] Setting addon dashboard=true in "embed-certs-912564"
	W1227 09:36:22.753581  622335 addons.go:248] addon dashboard should already be in state true
	I1227 09:36:22.753576  622335 addons.go:70] Setting default-storageclass=true in profile "embed-certs-912564"
	I1227 09:36:22.753593  622335 host.go:66] Checking if "embed-certs-912564" exists ...
	I1227 09:36:22.753606  622335 host.go:66] Checking if "embed-certs-912564" exists ...
	I1227 09:36:22.753608  622335 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-912564"
	I1227 09:36:22.753609  622335 config.go:182] Loaded profile config "embed-certs-912564": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:36:22.753938  622335 cli_runner.go:164] Run: docker container inspect embed-certs-912564 --format={{.State.Status}}
	I1227 09:36:22.754139  622335 cli_runner.go:164] Run: docker container inspect embed-certs-912564 --format={{.State.Status}}
	I1227 09:36:22.754187  622335 cli_runner.go:164] Run: docker container inspect embed-certs-912564 --format={{.State.Status}}
	I1227 09:36:22.779956  622335 out.go:179] * Verifying Kubernetes components...
	I1227 09:36:22.780837  622335 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1227 09:36:22.780872  622335 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1227 09:36:22.781059  622335 addons.go:239] Setting addon default-storageclass=true in "embed-certs-912564"
	W1227 09:36:22.781224  622335 addons.go:248] addon default-storageclass should already be in state true
	I1227 09:36:22.781269  622335 host.go:66] Checking if "embed-certs-912564" exists ...
	I1227 09:36:22.781577  622335 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 09:36:22.781774  622335 cli_runner.go:164] Run: docker container inspect embed-certs-912564 --format={{.State.Status}}
	I1227 09:36:22.784055  622335 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1227 09:36:22.784074  622335 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1227 09:36:22.784123  622335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-912564
	I1227 09:36:22.785174  622335 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	W1227 09:36:19.763616  613189 pod_ready.go:104] pod "coredns-5dd5756b68-l2f7v" is not "Ready", error: <nil>
	W1227 09:36:22.263259  613189 pod_ready.go:104] pod "coredns-5dd5756b68-l2f7v" is not "Ready", error: <nil>
	I1227 09:36:22.786204  622335 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1227 09:36:22.786220  622335 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1227 09:36:22.786279  622335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-912564
	I1227 09:36:22.807331  622335 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1227 09:36:22.807359  622335 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1227 09:36:22.807427  622335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-912564
	I1227 09:36:22.807672  622335 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/embed-certs-912564/id_rsa Username:docker}
	I1227 09:36:22.811339  622335 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/embed-certs-912564/id_rsa Username:docker}
	I1227 09:36:22.843561  622335 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/embed-certs-912564/id_rsa Username:docker}
	I1227 09:36:22.926282  622335 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 09:36:22.928069  622335 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1227 09:36:22.937857  622335 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1227 09:36:22.937883  622335 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1227 09:36:22.940823  622335 node_ready.go:35] waiting up to 6m0s for node "embed-certs-912564" to be "Ready" ...
	I1227 09:36:22.952448  622335 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1227 09:36:22.952469  622335 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1227 09:36:22.954110  622335 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1227 09:36:22.967202  622335 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1227 09:36:22.967226  622335 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1227 09:36:22.981928  622335 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1227 09:36:22.981953  622335 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1227 09:36:22.998766  622335 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1227 09:36:22.998803  622335 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1227 09:36:23.012500  622335 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1227 09:36:23.012534  622335 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1227 09:36:23.025289  622335 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1227 09:36:23.025315  622335 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1227 09:36:23.037841  622335 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1227 09:36:23.037868  622335 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1227 09:36:23.050151  622335 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1227 09:36:23.050172  622335 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1227 09:36:23.063127  622335 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
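
[Note: the dashboard addon above is installed by scp-ing each manifest to /etc/kubernetes/addons and then applying them all in a single kubectl invocation. Below is a minimal Go sketch of that batched apply, not minikube's actual code: it assumes kubectl is on PATH, reuses two of the manifest paths from the log, and truncates the list for brevity.]

	package main

	import (
		"log"
		"os/exec"
	)

	func main() {
		// Manifests staged under /etc/kubernetes/addons, as in the log above
		// (list truncated here for brevity).
		manifests := []string{
			"/etc/kubernetes/addons/dashboard-ns.yaml",
			"/etc/kubernetes/addons/dashboard-svc.yaml",
		}
		// Build one "kubectl apply -f a.yaml -f b.yaml ..." command so all
		// objects are submitted together rather than one process per file.
		args := []string{"apply"}
		for _, m := range manifests {
			args = append(args, "-f", m)
		}
		cmd := exec.Command("kubectl", args...)
		cmd.Env = append(cmd.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
		if out, err := cmd.CombinedOutput(); err != nil {
			log.Fatalf("kubectl apply failed: %v\n%s", err, out)
		}
	}
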
	I1227 09:36:24.188905  622335 node_ready.go:49] node "embed-certs-912564" is "Ready"
	I1227 09:36:24.188945  622335 node_ready.go:38] duration metric: took 1.248089417s for node "embed-certs-912564" to be "Ready" ...
	I1227 09:36:24.188966  622335 api_server.go:52] waiting for apiserver process to appear ...
	I1227 09:36:24.189025  622335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 09:36:24.706854  622335 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.778752913s)
	I1227 09:36:24.706946  622335 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.75280549s)
	I1227 09:36:24.707028  622335 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.643865472s)
	I1227 09:36:24.707081  622335 api_server.go:72] duration metric: took 1.953688648s to wait for apiserver process to appear ...
	I1227 09:36:24.707107  622335 api_server.go:88] waiting for apiserver healthz status ...
	I1227 09:36:24.707132  622335 api_server.go:299] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1227 09:36:24.708777  622335 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-912564 addons enable metrics-server
	
	I1227 09:36:24.713189  622335 api_server.go:325] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1227 09:36:24.713214  622335 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
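
[Note: the 500s above come from /healthz while two post-start hooks (rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes) are still settling; the waiter simply re-polls every ~500ms until it sees 200, which it does at 09:36:25.712 below. A minimal sketch of that polling loop follows; the address is the one from this log, and skipping TLS verification is an illustrative shortcut, the real waiter authenticates against the cluster CA.]

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		for {
			resp, err := client.Get("https://192.168.94.2:8443/healthz")
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Printf("healthz: %s\n", body) // "ok"
					return
				}
				fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
			}
			time.Sleep(500 * time.Millisecond)
		}
	}
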
	I1227 09:36:24.718380  622335 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1227 09:36:24.719363  622335 addons.go:530] duration metric: took 1.965938957s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1227 09:36:25.207981  622335 api_server.go:299] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1227 09:36:25.212198  622335 api_server.go:325] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1227 09:36:25.212222  622335 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1227 09:36:25.707923  622335 api_server.go:299] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1227 09:36:25.712098  622335 api_server.go:325] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1227 09:36:25.712980  622335 api_server.go:141] control plane version: v1.35.0
	I1227 09:36:25.713005  622335 api_server.go:131] duration metric: took 1.005888464s to wait for apiserver health ...
	I1227 09:36:25.713013  622335 system_pods.go:43] waiting for kube-system pods to appear ...
	I1227 09:36:25.716444  622335 system_pods.go:59] 8 kube-system pods found
	I1227 09:36:25.716489  622335 system_pods.go:61] "coredns-7d764666f9-vm5hp" [e07c8612-a077-44b5-b84f-6dda3bc90a64] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 09:36:25.716503  622335 system_pods.go:61] "etcd-embed-certs-912564" [05ab5aa9-c66f-449d-bb47-6c48d44d1db7] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1227 09:36:25.716510  622335 system_pods.go:61] "kindnet-bznfn" [73083928-8435-4e2e-913b-ff93fa424106] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1227 09:36:25.716517  622335 system_pods.go:61] "kube-apiserver-embed-certs-912564" [16628f2b-e9fa-4772-b8cb-8ef74d603b7f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1227 09:36:25.716522  622335 system_pods.go:61] "kube-controller-manager-embed-certs-912564" [78314a52-dd39-4d37-9d70-8002b392e928] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1227 09:36:25.716532  622335 system_pods.go:61] "kube-proxy-dv8ch" [2a923e9f-87c7-472f-b5b9-506bcdc67cb3] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1227 09:36:25.716537  622335 system_pods.go:61] "kube-scheduler-embed-certs-912564" [81aa6d57-9095-411f-b4a0-653e59fccb72] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1227 09:36:25.716542  622335 system_pods.go:61] "storage-provisioner" [af70aaa7-5435-48e3-8275-f12100402980] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1227 09:36:25.716548  622335 system_pods.go:74] duration metric: took 3.528996ms to wait for pod list to return data ...
	I1227 09:36:25.716555  622335 default_sa.go:34] waiting for default service account to be created ...
	I1227 09:36:25.719151  622335 default_sa.go:45] found service account: "default"
	I1227 09:36:25.719169  622335 default_sa.go:55] duration metric: took 2.608678ms for default service account to be created ...
	I1227 09:36:25.719176  622335 system_pods.go:116] waiting for k8s-apps to be running ...
	I1227 09:36:25.721738  622335 system_pods.go:86] 8 kube-system pods found
	I1227 09:36:25.721763  622335 system_pods.go:89] "coredns-7d764666f9-vm5hp" [e07c8612-a077-44b5-b84f-6dda3bc90a64] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 09:36:25.721771  622335 system_pods.go:89] "etcd-embed-certs-912564" [05ab5aa9-c66f-449d-bb47-6c48d44d1db7] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1227 09:36:25.721778  622335 system_pods.go:89] "kindnet-bznfn" [73083928-8435-4e2e-913b-ff93fa424106] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1227 09:36:25.721786  622335 system_pods.go:89] "kube-apiserver-embed-certs-912564" [16628f2b-e9fa-4772-b8cb-8ef74d603b7f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1227 09:36:25.721805  622335 system_pods.go:89] "kube-controller-manager-embed-certs-912564" [78314a52-dd39-4d37-9d70-8002b392e928] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1227 09:36:25.721816  622335 system_pods.go:89] "kube-proxy-dv8ch" [2a923e9f-87c7-472f-b5b9-506bcdc67cb3] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1227 09:36:25.721831  622335 system_pods.go:89] "kube-scheduler-embed-certs-912564" [81aa6d57-9095-411f-b4a0-653e59fccb72] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1227 09:36:25.721839  622335 system_pods.go:89] "storage-provisioner" [af70aaa7-5435-48e3-8275-f12100402980] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1227 09:36:25.721850  622335 system_pods.go:126] duration metric: took 2.668061ms to wait for k8s-apps to be running ...
	I1227 09:36:25.721859  622335 system_svc.go:44] waiting for kubelet service to be running ....
	I1227 09:36:25.721906  622335 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 09:36:25.735465  622335 system_svc.go:56] duration metric: took 13.598326ms WaitForService to wait for kubelet
	I1227 09:36:25.735491  622335 kubeadm.go:587] duration metric: took 2.98210021s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 09:36:25.735514  622335 node_conditions.go:102] verifying NodePressure condition ...
	I1227 09:36:25.737852  622335 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1227 09:36:25.737873  622335 node_conditions.go:123] node cpu capacity is 8
	I1227 09:36:25.737888  622335 node_conditions.go:105] duration metric: took 2.365444ms to run NodePressure ...
	I1227 09:36:25.737899  622335 start.go:242] waiting for startup goroutines ...
	I1227 09:36:25.737908  622335 start.go:247] waiting for cluster config update ...
	I1227 09:36:25.737919  622335 start.go:256] writing updated cluster config ...
	I1227 09:36:25.738145  622335 ssh_runner.go:195] Run: rm -f paused
	I1227 09:36:25.741968  622335 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1227 09:36:25.745155  622335 pod_ready.go:83] waiting for pod "coredns-7d764666f9-vm5hp" in "kube-system" namespace to be "Ready" or be gone ...
	W1227 09:36:22.664879  616179 node_ready.go:57] node "default-k8s-diff-port-497722" has "Ready":"False" status (will retry)
	I1227 09:36:23.163413  616179 node_ready.go:49] node "default-k8s-diff-port-497722" is "Ready"
	I1227 09:36:23.163451  616179 node_ready.go:38] duration metric: took 13.503160256s for node "default-k8s-diff-port-497722" to be "Ready" ...
	I1227 09:36:23.163470  616179 api_server.go:52] waiting for apiserver process to appear ...
	I1227 09:36:23.163548  616179 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 09:36:23.178899  616179 api_server.go:72] duration metric: took 13.808108711s to wait for apiserver process to appear ...
	I1227 09:36:23.178931  616179 api_server.go:88] waiting for apiserver healthz status ...
	I1227 09:36:23.178966  616179 api_server.go:299] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I1227 09:36:23.183768  616179 api_server.go:325] https://192.168.103.2:8444/healthz returned 200:
	ok
	I1227 09:36:23.184969  616179 api_server.go:141] control plane version: v1.35.0
	I1227 09:36:23.184994  616179 api_server.go:131] duration metric: took 6.056457ms to wait for apiserver health ...
	I1227 09:36:23.185003  616179 system_pods.go:43] waiting for kube-system pods to appear ...
	I1227 09:36:23.188536  616179 system_pods.go:59] 8 kube-system pods found
	I1227 09:36:23.188575  616179 system_pods.go:61] "coredns-7d764666f9-wfv5r" [c9445108-899a-4589-9501-4ffa7cd80a43] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 09:36:23.188586  616179 system_pods.go:61] "etcd-default-k8s-diff-port-497722" [5e51d021-5ac6-42ab-82c9-2f3db25a111e] Running
	I1227 09:36:23.188599  616179 system_pods.go:61] "kindnet-rd4dj" [c9a44ecf-3860-4022-bcbe-25cfdb86502a] Running
	I1227 09:36:23.188605  616179 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-497722" [fc2ed6d4-1f2c-4e4d-9a59-07b9ef712893] Running
	I1227 09:36:23.188611  616179 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-497722" [c56ec738-10be-417c-808c-c65189f7a6a8] Running
	I1227 09:36:23.188619  616179 system_pods.go:61] "kube-proxy-6z4vt" [25c2458e-d68a-488d-803c-80e0c6191bad] Running
	I1227 09:36:23.188624  616179 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-497722" [fccea7b8-16e7-488f-b2a4-3d46c2d108d9] Running
	I1227 09:36:23.188633  616179 system_pods.go:61] "storage-provisioner" [c84aab32-9d34-4b1d-a3ee-813926808b75] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1227 09:36:23.188641  616179 system_pods.go:74] duration metric: took 3.631884ms to wait for pod list to return data ...
	I1227 09:36:23.188655  616179 default_sa.go:34] waiting for default service account to be created ...
	I1227 09:36:23.191155  616179 default_sa.go:45] found service account: "default"
	I1227 09:36:23.191180  616179 default_sa.go:55] duration metric: took 2.516479ms for default service account to be created ...
	I1227 09:36:23.191191  616179 system_pods.go:116] waiting for k8s-apps to be running ...
	I1227 09:36:23.194108  616179 system_pods.go:86] 8 kube-system pods found
	I1227 09:36:23.194138  616179 system_pods.go:89] "coredns-7d764666f9-wfv5r" [c9445108-899a-4589-9501-4ffa7cd80a43] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 09:36:23.194145  616179 system_pods.go:89] "etcd-default-k8s-diff-port-497722" [5e51d021-5ac6-42ab-82c9-2f3db25a111e] Running
	I1227 09:36:23.194154  616179 system_pods.go:89] "kindnet-rd4dj" [c9a44ecf-3860-4022-bcbe-25cfdb86502a] Running
	I1227 09:36:23.194160  616179 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-497722" [fc2ed6d4-1f2c-4e4d-9a59-07b9ef712893] Running
	I1227 09:36:23.194165  616179 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-497722" [c56ec738-10be-417c-808c-c65189f7a6a8] Running
	I1227 09:36:23.194171  616179 system_pods.go:89] "kube-proxy-6z4vt" [25c2458e-d68a-488d-803c-80e0c6191bad] Running
	I1227 09:36:23.194175  616179 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-497722" [fccea7b8-16e7-488f-b2a4-3d46c2d108d9] Running
	I1227 09:36:23.194179  616179 system_pods.go:89] "storage-provisioner" [c84aab32-9d34-4b1d-a3ee-813926808b75] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1227 09:36:23.194203  616179 retry.go:84] will retry after 200ms: missing components: kube-dns
	I1227 09:36:23.406401  616179 system_pods.go:86] 8 kube-system pods found
	I1227 09:36:23.406445  616179 system_pods.go:89] "coredns-7d764666f9-wfv5r" [c9445108-899a-4589-9501-4ffa7cd80a43] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 09:36:23.406454  616179 system_pods.go:89] "etcd-default-k8s-diff-port-497722" [5e51d021-5ac6-42ab-82c9-2f3db25a111e] Running
	I1227 09:36:23.406461  616179 system_pods.go:89] "kindnet-rd4dj" [c9a44ecf-3860-4022-bcbe-25cfdb86502a] Running
	I1227 09:36:23.406467  616179 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-497722" [fc2ed6d4-1f2c-4e4d-9a59-07b9ef712893] Running
	I1227 09:36:23.406473  616179 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-497722" [c56ec738-10be-417c-808c-c65189f7a6a8] Running
	I1227 09:36:23.406478  616179 system_pods.go:89] "kube-proxy-6z4vt" [25c2458e-d68a-488d-803c-80e0c6191bad] Running
	I1227 09:36:23.406483  616179 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-497722" [fccea7b8-16e7-488f-b2a4-3d46c2d108d9] Running
	I1227 09:36:23.406488  616179 system_pods.go:89] "storage-provisioner" [c84aab32-9d34-4b1d-a3ee-813926808b75] Running
	I1227 09:36:23.406499  616179 system_pods.go:126] duration metric: took 215.299714ms to wait for k8s-apps to be running ...
	I1227 09:36:23.406513  616179 system_svc.go:44] waiting for kubelet service to be running ....
	I1227 09:36:23.406568  616179 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 09:36:23.422554  616179 system_svc.go:56] duration metric: took 16.02927ms WaitForService to wait for kubelet
	I1227 09:36:23.422585  616179 kubeadm.go:587] duration metric: took 14.05180013s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 09:36:23.422606  616179 node_conditions.go:102] verifying NodePressure condition ...
	I1227 09:36:23.425525  616179 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1227 09:36:23.425563  616179 node_conditions.go:123] node cpu capacity is 8
	I1227 09:36:23.425603  616179 node_conditions.go:105] duration metric: took 2.990468ms to run NodePressure ...
	I1227 09:36:23.425622  616179 start.go:242] waiting for startup goroutines ...
	I1227 09:36:23.425633  616179 start.go:247] waiting for cluster config update ...
	I1227 09:36:23.425646  616179 start.go:256] writing updated cluster config ...
	I1227 09:36:23.426029  616179 ssh_runner.go:195] Run: rm -f paused
	I1227 09:36:23.430159  616179 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1227 09:36:23.433730  616179 pod_ready.go:83] waiting for pod "coredns-7d764666f9-wfv5r" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:36:24.439386  616179 pod_ready.go:94] pod "coredns-7d764666f9-wfv5r" is "Ready"
	I1227 09:36:24.439414  616179 pod_ready.go:86] duration metric: took 1.005660831s for pod "coredns-7d764666f9-wfv5r" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:36:24.442016  616179 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-497722" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:36:24.445826  616179 pod_ready.go:94] pod "etcd-default-k8s-diff-port-497722" is "Ready"
	I1227 09:36:24.445849  616179 pod_ready.go:86] duration metric: took 3.807307ms for pod "etcd-default-k8s-diff-port-497722" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:36:24.447851  616179 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-497722" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:36:24.451495  616179 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-497722" is "Ready"
	I1227 09:36:24.451514  616179 pod_ready.go:86] duration metric: took 3.640701ms for pod "kube-apiserver-default-k8s-diff-port-497722" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:36:24.453395  616179 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-497722" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:36:24.637337  616179 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-497722" is "Ready"
	I1227 09:36:24.637370  616179 pod_ready.go:86] duration metric: took 183.957443ms for pod "kube-controller-manager-default-k8s-diff-port-497722" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:36:24.837272  616179 pod_ready.go:83] waiting for pod "kube-proxy-6z4vt" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:36:25.238026  616179 pod_ready.go:94] pod "kube-proxy-6z4vt" is "Ready"
	I1227 09:36:25.238052  616179 pod_ready.go:86] duration metric: took 400.752514ms for pod "kube-proxy-6z4vt" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:36:25.437923  616179 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-497722" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:36:25.837173  616179 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-497722" is "Ready"
	I1227 09:36:25.837207  616179 pod_ready.go:86] duration metric: took 399.25682ms for pod "kube-scheduler-default-k8s-diff-port-497722" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:36:25.837234  616179 pod_ready.go:40] duration metric: took 2.40703441s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1227 09:36:25.883242  616179 start.go:625] kubectl: 1.35.0, cluster: 1.35.0 (minor skew: 0)
	I1227 09:36:25.886195  616179 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-497722" cluster and "default" namespace by default
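
[Note: several of the waits in this run share one shape, e.g. the "will retry after 200ms: missing components: kube-dns" line from retry.go above: run a check, log the failure, sleep, repeat until a deadline. A generic sketch of that loop follows; the function name, durations, and fake check are illustrative, not minikube's actual API.]

	package main

	import (
		"errors"
		"fmt"
		"log"
		"time"
	)

	// retryUntil re-runs check until it succeeds or deadline elapses,
	// logging each failure the way retry.go does in the log above.
	func retryUntil(deadline, step time.Duration, check func() error) error {
		stop := time.Now().Add(deadline)
		for {
			err := check()
			if err == nil {
				return nil
			}
			if time.Now().After(stop) {
				return fmt.Errorf("timed out waiting: %w", err)
			}
			log.Printf("will retry after %v: %v", step, err)
			time.Sleep(step)
		}
	}

	func main() {
		attempts := 0
		err := retryUntil(2*time.Second, 200*time.Millisecond, func() error {
			attempts++
			if attempts < 3 {
				return errors.New("missing components: kube-dns")
			}
			return nil
		})
		log.Printf("done after %d attempts, err=%v", attempts, err)
	}
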
	W1227 09:36:24.265838  613189 pod_ready.go:104] pod "coredns-5dd5756b68-l2f7v" is not "Ready", error: <nil>
	W1227 09:36:26.761645  613189 pod_ready.go:104] pod "coredns-5dd5756b68-l2f7v" is not "Ready", error: <nil>
	W1227 09:36:28.763118  613189 pod_ready.go:104] pod "coredns-5dd5756b68-l2f7v" is not "Ready", error: <nil>
	I1227 09:36:29.267372  613189 pod_ready.go:94] pod "coredns-5dd5756b68-l2f7v" is "Ready"
	I1227 09:36:29.267400  613189 pod_ready.go:86] duration metric: took 39.010575903s for pod "coredns-5dd5756b68-l2f7v" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:36:29.271701  613189 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-094398" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:36:29.276827  613189 pod_ready.go:94] pod "etcd-old-k8s-version-094398" is "Ready"
	I1227 09:36:29.276854  613189 pod_ready.go:86] duration metric: took 5.125471ms for pod "etcd-old-k8s-version-094398" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:36:29.280131  613189 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-094398" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:36:29.285165  613189 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-094398" is "Ready"
	I1227 09:36:29.285233  613189 pod_ready.go:86] duration metric: took 5.074304ms for pod "kube-apiserver-old-k8s-version-094398" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:36:29.288829  613189 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-094398" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:36:29.464310  613189 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-094398" is "Ready"
	I1227 09:36:29.464343  613189 pod_ready.go:86] duration metric: took 175.492277ms for pod "kube-controller-manager-old-k8s-version-094398" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:36:29.664360  613189 pod_ready.go:83] waiting for pod "kube-proxy-w8h4h" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:36:30.063146  613189 pod_ready.go:94] pod "kube-proxy-w8h4h" is "Ready"
	I1227 09:36:30.063178  613189 pod_ready.go:86] duration metric: took 398.787394ms for pod "kube-proxy-w8h4h" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:36:30.264144  613189 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-094398" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:36:30.663989  613189 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-094398" is "Ready"
	I1227 09:36:30.664030  613189 pod_ready.go:86] duration metric: took 399.855087ms for pod "kube-scheduler-old-k8s-version-094398" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:36:30.664045  613189 pod_ready.go:40] duration metric: took 40.412649094s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1227 09:36:30.710355  613189 start.go:625] kubectl: 1.35.0, cluster: 1.28.0 (minor skew: 7)
	I1227 09:36:30.713249  613189 out.go:203] 
	W1227 09:36:30.714603  613189 out.go:285] ! /usr/local/bin/kubectl is version 1.35.0, which may have incompatibilities with Kubernetes 1.28.0.
	I1227 09:36:30.715969  613189 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1227 09:36:30.717115  613189 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-094398" cluster and "default" namespace by default
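
[Note: the "/usr/local/bin/kubectl is version 1.35.0" warning above is produced by comparing the client's minor version with the cluster's: 1.35 vs 1.28 is a skew of 7 minors, far outside the one-minor window kubectl officially supports. A small sketch of that comparison follows; the parsing and threshold are illustrative.]

	package main

	import (
		"fmt"
		"strconv"
		"strings"
	)

	// minor extracts the minor component from a "major.minor.patch" version.
	func minor(v string) int {
		m, _ := strconv.Atoi(strings.Split(v, ".")[1])
		return m
	}

	func main() {
		client, cluster := "1.35.0", "1.28.0"
		skew := minor(client) - minor(cluster)
		if skew < 0 {
			skew = -skew
		}
		fmt.Printf("kubectl: %s, cluster: %s (minor skew: %d)\n", client, cluster, skew)
		if skew > 1 {
			fmt.Printf("! kubectl %s may have incompatibilities with Kubernetes %s\n", client, cluster)
		}
	}
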
	W1227 09:36:27.750662  622335 pod_ready.go:104] pod "coredns-7d764666f9-vm5hp" is not "Ready", error: <nil>
	W1227 09:36:29.752327  622335 pod_ready.go:104] pod "coredns-7d764666f9-vm5hp" is not "Ready", error: <nil>
	W1227 09:36:32.251754  622335 pod_ready.go:104] pod "coredns-7d764666f9-vm5hp" is not "Ready", error: <nil>
	W1227 09:36:34.327887  622335 pod_ready.go:104] pod "coredns-7d764666f9-vm5hp" is not "Ready", error: <nil>
	W1227 09:36:36.751189  622335 pod_ready.go:104] pod "coredns-7d764666f9-vm5hp" is not "Ready", error: <nil>
	W1227 09:36:39.251134  622335 pod_ready.go:104] pod "coredns-7d764666f9-vm5hp" is not "Ready", error: <nil>
	W1227 09:36:41.750390  622335 pod_ready.go:104] pod "coredns-7d764666f9-vm5hp" is not "Ready", error: <nil>
	W1227 09:36:44.249964  622335 pod_ready.go:104] pod "coredns-7d764666f9-vm5hp" is not "Ready", error: <nil>
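
[Note: the repeated `pod "coredns-7d764666f9-vm5hp" is not "Ready"` warnings above are polls of the pod's Ready condition: a pod can be Running (as in the system_pods lines earlier) while its Ready condition is still False. A minimal client-go sketch of one such check follows; the kubeconfig path and pod name are taken from this log, and the surrounding retry loop and error handling are simplified.]

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.Background(),
			"coredns-7d764666f9-vm5hp", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		// A pod counts as Ready only when its PodReady condition is True.
		ready := false
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				ready = true
			}
		}
		fmt.Printf("pod %s ready=%v\n", pod.Name, ready)
	}
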
	
	
	==> CRI-O <==
	Dec 27 09:36:08 old-k8s-version-094398 crio[568]: time="2025-12-27T09:36:08.061188144Z" level=info msg="Created container 226e6694486c7dbda528967c387d7f6eba3323ceedc06d943992c2aa1a31b965: kubernetes-dashboard/kubernetes-dashboard-8694d4445c-5jv6d/kubernetes-dashboard" id=faee3fee-1943-48f9-b200-23a2db4082aa name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 09:36:08 old-k8s-version-094398 crio[568]: time="2025-12-27T09:36:08.061674832Z" level=info msg="Starting container: 226e6694486c7dbda528967c387d7f6eba3323ceedc06d943992c2aa1a31b965" id=57e9101c-6a7f-4e32-ad7d-d277dc4f41fc name=/runtime.v1.RuntimeService/StartContainer
	Dec 27 09:36:08 old-k8s-version-094398 crio[568]: time="2025-12-27T09:36:08.06332045Z" level=info msg="Started container" PID=1729 containerID=226e6694486c7dbda528967c387d7f6eba3323ceedc06d943992c2aa1a31b965 description=kubernetes-dashboard/kubernetes-dashboard-8694d4445c-5jv6d/kubernetes-dashboard id=57e9101c-6a7f-4e32-ad7d-d277dc4f41fc name=/runtime.v1.RuntimeService/StartContainer sandboxID=9e55c04dd102e348cfa24aae353b2626b5dc33cd608dbbb9064beb6639febf31
	Dec 27 09:36:20 old-k8s-version-094398 crio[568]: time="2025-12-27T09:36:20.083243576Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=48f22a67-b88c-483f-aff9-75038bde454e name=/runtime.v1.ImageService/ImageStatus
	Dec 27 09:36:20 old-k8s-version-094398 crio[568]: time="2025-12-27T09:36:20.084182551Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=98969ea1-00df-47bb-ac1f-cd1b009c3620 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 09:36:20 old-k8s-version-094398 crio[568]: time="2025-12-27T09:36:20.085162176Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=dafc2881-b7fc-4efb-8345-711bcf3e2552 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 09:36:20 old-k8s-version-094398 crio[568]: time="2025-12-27T09:36:20.085302002Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 09:36:20 old-k8s-version-094398 crio[568]: time="2025-12-27T09:36:20.089275067Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 09:36:20 old-k8s-version-094398 crio[568]: time="2025-12-27T09:36:20.089422222Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/452ed8e95a5819759de91ef7b1f62129fe64a168aac652898c88417b693ceb0c/merged/etc/passwd: no such file or directory"
	Dec 27 09:36:20 old-k8s-version-094398 crio[568]: time="2025-12-27T09:36:20.089445982Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/452ed8e95a5819759de91ef7b1f62129fe64a168aac652898c88417b693ceb0c/merged/etc/group: no such file or directory"
	Dec 27 09:36:20 old-k8s-version-094398 crio[568]: time="2025-12-27T09:36:20.089654808Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 09:36:20 old-k8s-version-094398 crio[568]: time="2025-12-27T09:36:20.107442029Z" level=info msg="Created container 2613f183356984cb4a6f283d937f3370b199b0a7966e716e98cf84ee962e01be: kube-system/storage-provisioner/storage-provisioner" id=dafc2881-b7fc-4efb-8345-711bcf3e2552 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 09:36:20 old-k8s-version-094398 crio[568]: time="2025-12-27T09:36:20.10808415Z" level=info msg="Starting container: 2613f183356984cb4a6f283d937f3370b199b0a7966e716e98cf84ee962e01be" id=db4e3b1b-2783-49f6-a6cf-1b10dd76df77 name=/runtime.v1.RuntimeService/StartContainer
	Dec 27 09:36:20 old-k8s-version-094398 crio[568]: time="2025-12-27T09:36:20.109989891Z" level=info msg="Started container" PID=1755 containerID=2613f183356984cb4a6f283d937f3370b199b0a7966e716e98cf84ee962e01be description=kube-system/storage-provisioner/storage-provisioner id=db4e3b1b-2783-49f6-a6cf-1b10dd76df77 name=/runtime.v1.RuntimeService/StartContainer sandboxID=d59a3e614ede54f9dc6f5940addd9ab4fd1aa4d04ed10015b8ec9cea64c3f94f
	Dec 27 09:36:26 old-k8s-version-094398 crio[568]: time="2025-12-27T09:36:26.955895586Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=e3ec6a62-f9cf-4145-841c-de30c6163485 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 09:36:26 old-k8s-version-094398 crio[568]: time="2025-12-27T09:36:26.956886271Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=f7c7276a-55f5-427a-b8d2-21a8d70af59a name=/runtime.v1.ImageService/ImageStatus
	Dec 27 09:36:26 old-k8s-version-094398 crio[568]: time="2025-12-27T09:36:26.957914894Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-vv6xx/dashboard-metrics-scraper" id=e6476923-fb69-4afe-aaf0-3cf3ed7106ed name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 09:36:26 old-k8s-version-094398 crio[568]: time="2025-12-27T09:36:26.958067203Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 09:36:26 old-k8s-version-094398 crio[568]: time="2025-12-27T09:36:26.963540072Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 09:36:26 old-k8s-version-094398 crio[568]: time="2025-12-27T09:36:26.964139334Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 09:36:26 old-k8s-version-094398 crio[568]: time="2025-12-27T09:36:26.98956236Z" level=info msg="Created container e515e63f9478f3764e817ba81f243e7bec7930bfcdb3d016306773fdc664f99a: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-vv6xx/dashboard-metrics-scraper" id=e6476923-fb69-4afe-aaf0-3cf3ed7106ed name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 09:36:26 old-k8s-version-094398 crio[568]: time="2025-12-27T09:36:26.990175047Z" level=info msg="Starting container: e515e63f9478f3764e817ba81f243e7bec7930bfcdb3d016306773fdc664f99a" id=649f1c18-1700-44f6-9cd7-934895851e92 name=/runtime.v1.RuntimeService/StartContainer
	Dec 27 09:36:26 old-k8s-version-094398 crio[568]: time="2025-12-27T09:36:26.99204128Z" level=info msg="Started container" PID=1771 containerID=e515e63f9478f3764e817ba81f243e7bec7930bfcdb3d016306773fdc664f99a description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-vv6xx/dashboard-metrics-scraper id=649f1c18-1700-44f6-9cd7-934895851e92 name=/runtime.v1.RuntimeService/StartContainer sandboxID=08f15750dc8398826d19b2b27d6f98622ddbad642016bccfdfbb7d38b46f7081
	Dec 27 09:36:27 old-k8s-version-094398 crio[568]: time="2025-12-27T09:36:27.10423379Z" level=info msg="Removing container: 1679ca42635a728007911cb9006ec19e95201273d41d4c1933cc2a77a19eaa52" id=f85bac27-dfe0-494e-b6cf-46c8c036ae0e name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 27 09:36:27 old-k8s-version-094398 crio[568]: time="2025-12-27T09:36:27.113660297Z" level=info msg="Removed container 1679ca42635a728007911cb9006ec19e95201273d41d4c1933cc2a77a19eaa52: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-vv6xx/dashboard-metrics-scraper" id=f85bac27-dfe0-494e-b6cf-46c8c036ae0e name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	e515e63f9478f       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           20 seconds ago       Exited              dashboard-metrics-scraper   2                   08f15750dc839       dashboard-metrics-scraper-5f989dc9cf-vv6xx       kubernetes-dashboard
	2613f18335698       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           27 seconds ago       Running             storage-provisioner         1                   d59a3e614ede5       storage-provisioner                              kube-system
	226e6694486c7       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   39 seconds ago       Running             kubernetes-dashboard        0                   9e55c04dd102e       kubernetes-dashboard-8694d4445c-5jv6d            kubernetes-dashboard
	4ca19515ea39c       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                           58 seconds ago       Running             coredns                     0                   874369c605ecc       coredns-5dd5756b68-l2f7v                         kube-system
	a2edc52c4d7b9       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           58 seconds ago       Running             busybox                     1                   2bdbb562f4ba8       busybox                                          default
	6f2897ce522ad       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           58 seconds ago       Exited              storage-provisioner         0                   d59a3e614ede5       storage-provisioner                              kube-system
	35bf241ea4f56       4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251                                           58 seconds ago       Running             kindnet-cni                 0                   794b80c3d7c45       kindnet-hb4bf                                    kube-system
	784231ee46284       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                           58 seconds ago       Running             kube-proxy                  0                   08358cec2f2db       kube-proxy-w8h4h                                 kube-system
	c5fc71eb798e5       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                           About a minute ago   Running             etcd                        0                   de58b3b256b42       etcd-old-k8s-version-094398                      kube-system
	e5693fba04384       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                           About a minute ago   Running             kube-scheduler              0                   fdf4bcac577a3       kube-scheduler-old-k8s-version-094398            kube-system
	8223e42bf97a0       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                           About a minute ago   Running             kube-controller-manager     0                   ef4d1bc0e1654       kube-controller-manager-old-k8s-version-094398   kube-system
	9f6ac155f6a42       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                           About a minute ago   Running             kube-apiserver              0                   6a857eb2caf4d       kube-apiserver-old-k8s-version-094398            kube-system
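
[Note: the container status table above is the CRI view of the node: each row is a container CRI-O reports over its RuntimeService socket (unix:///var/run/crio/crio.sock, per the node annotation further down). The following is a sketch of fetching a similar listing over gRPC with the cri-api client; the plain-text dial and the exact output formatting are assumptions, not the report tool's actual code.]

	package main

	import (
		"context"
		"fmt"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		// CRI-O listens on a local unix socket; no TLS is involved.
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			panic(err)
		}
		defer conn.Close()

		client := runtimeapi.NewRuntimeServiceClient(conn)
		resp, err := client.ListContainers(context.Background(),
			&runtimeapi.ListContainersRequest{})
		if err != nil {
			panic(err)
		}
		// Print a short ID, name, and state for each container, roughly
		// matching the columns of the table above.
		for _, c := range resp.Containers {
			fmt.Printf("%-13.13s %-27s %s\n", c.Id, c.Metadata.Name, c.State)
		}
	}
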
	
	
	==> coredns [4ca19515ea39c3a7dcba93a0c272b27d0f41a8aed7ed1b0673487d1d34e4a94e] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:35127 - 15892 "HINFO IN 6935948074902101905.2636398165220937640. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.0145102s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
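
[Note: the final CoreDNS warning is a plain connectivity failure: the kubernetes plugin could not complete a TCP connection to the apiserver's Service VIP at 10.96.0.1:443 before its dial timeout. Run from inside a pod, a probe like the following separates a network-path problem from an API-level error; the timeout value is illustrative.]

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// 10.96.0.1:443 is the in-cluster "kubernetes" Service VIP that
		// CoreDNS was trying to reach in the log above.
		conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 5*time.Second)
		if err != nil {
			fmt.Println("dial failed (matches the i/o timeout above):", err)
			return
		}
		conn.Close()
		fmt.Println("TCP path to the apiserver VIP is fine; debug at the API layer instead")
	}
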
	
	
	==> describe nodes <==
	Name:               old-k8s-version-094398
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-094398
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6d12708197eb5984f26340287ced0d25367967e8
	                    minikube.k8s.io/name=old-k8s-version-094398
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_27T09_34_40_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 27 Dec 2025 09:34:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-094398
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 27 Dec 2025 09:36:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 27 Dec 2025 09:36:19 +0000   Sat, 27 Dec 2025 09:34:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 27 Dec 2025 09:36:19 +0000   Sat, 27 Dec 2025 09:34:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 27 Dec 2025 09:36:19 +0000   Sat, 27 Dec 2025 09:34:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 27 Dec 2025 09:36:19 +0000   Sat, 27 Dec 2025 09:35:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-094398
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 6a05ebbd9dbde47388fc365d694bbbfd
	  System UUID:                2ce0eeb8-0e2e-4c1d-a2fe-ade8e6b7daeb
	  Boot ID:                    38891013-55ac-4a66-aef6-0b0711ecc60c
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         98s
	  kube-system                 coredns-5dd5756b68-l2f7v                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     115s
	  kube-system                 etcd-old-k8s-version-094398                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m8s
	  kube-system                 kindnet-hb4bf                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      115s
	  kube-system                 kube-apiserver-old-k8s-version-094398             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m8s
	  kube-system                 kube-controller-manager-old-k8s-version-094398    200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m8s
	  kube-system                 kube-proxy-w8h4h                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-scheduler-old-k8s-version-094398             100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m8s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         114s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-vv6xx        0 (0%)        0 (0%)      0 (0%)           0 (0%)         46s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-5jv6d             0 (0%)        0 (0%)      0 (0%)           0 (0%)         46s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
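
[Note: the Allocated resources block is just the column sums of the pod table above, shown against the node's capacity of 8 CPUs (8000m): CPU requests = 100m (coredns) + 100m (etcd) + 100m (kindnet) + 250m (kube-apiserver) + 200m (kube-controller-manager) + 100m (kube-scheduler) = 850m, and 850/8000 ≈ 10%; CPU limits = 100m (kindnet only). Memory requests = 70Mi + 100Mi + 50Mi = 220Mi and memory limits = 170Mi + 50Mi = 220Mi, both rounding to 0% of the 32863360Ki capacity.]
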
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 114s                   kube-proxy       
	  Normal  Starting                 58s                    kube-proxy       
	  Normal  Starting                 2m13s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m13s (x8 over 2m13s)  kubelet          Node old-k8s-version-094398 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m13s (x8 over 2m13s)  kubelet          Node old-k8s-version-094398 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m13s (x8 over 2m13s)  kubelet          Node old-k8s-version-094398 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    2m8s                   kubelet          Node old-k8s-version-094398 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  2m8s                   kubelet          Node old-k8s-version-094398 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     2m8s                   kubelet          Node old-k8s-version-094398 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m8s                   kubelet          Starting kubelet.
	  Normal  RegisteredNode           116s                   node-controller  Node old-k8s-version-094398 event: Registered Node old-k8s-version-094398 in Controller
	  Normal  NodeReady                102s                   kubelet          Node old-k8s-version-094398 status is now: NodeReady
	  Normal  Starting                 62s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  61s (x8 over 62s)      kubelet          Node old-k8s-version-094398 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    61s (x8 over 62s)      kubelet          Node old-k8s-version-094398 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     61s (x8 over 62s)      kubelet          Node old-k8s-version-094398 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           46s                    node-controller  Node old-k8s-version-094398 event: Registered Node old-k8s-version-094398 in Controller
	
	
	==> dmesg <==
	[Dec27 09:02] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 3a ce 76 ea 00 16 08 06
	[ +15.308804] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 5e ed 76 f4 5a 59 08 06
	[  +0.000321] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 3a ce 76 ea 00 16 08 06
	[  +1.257710] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 62 30 8d 6e b6 d3 08 06
	[Dec27 09:03] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff de f8 3b c4 47 35 08 06
	[  +0.002052] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff de 0f d7 5a 4b 84 08 06
	[ +15.582399] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff f6 ce 0c 6c 86 d2 08 06
	[  +0.000307] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 62 30 8d 6e b6 d3 08 06
	[  +9.916919] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 5e cc 0c a0 d8 19 08 06
	[  +0.000360] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 06 01 72 f8 79 43 08 06
	[ +21.695394] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 66 bb be 81 50 ce 08 06
	[  +0.000381] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff de 0f d7 5a 4b 84 08 06
	
	
	==> etcd [c5fc71eb798e5068dea2558184fc2f1324dfde7c0fb1d8eb63ec2e35afe24f87] <==
	{"level":"info","ts":"2025-12-27T09:35:46.566584Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-27T09:35:46.566596Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-27T09:35:46.566714Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2025-12-27T09:35:46.567418Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2025-12-27T09:35:46.567662Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-27T09:35:46.568092Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-27T09:35:46.57152Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-12-27T09:35:46.571812Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-12-27T09:35:46.5719Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-12-27T09:35:46.572056Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-12-27T09:35:46.572076Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-12-27T09:35:47.558438Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2025-12-27T09:35:47.558529Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-12-27T09:35:47.558551Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-12-27T09:35:47.558576Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2025-12-27T09:35:47.558589Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-12-27T09:35:47.558603Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2025-12-27T09:35:47.558614Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-12-27T09:35:47.680208Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:old-k8s-version-094398 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-27T09:35:47.680215Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T09:35:47.680244Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T09:35:47.680468Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-27T09:35:47.68049Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-27T09:35:47.681499Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-12-27T09:35:47.681625Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 09:36:47 up  1:19,  0 user,  load average: 3.05, 3.08, 2.31
	Linux old-k8s-version-094398 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [35bf241ea4f56cd57193378e8b7210de26f0ac1767ec286d23632fe9931444cc] <==
	I1227 09:35:49.611219       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1227 09:35:49.611448       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1227 09:35:49.611907       1 main.go:148] setting mtu 1500 for CNI 
	I1227 09:35:49.611967       1 main.go:178] kindnetd IP family: "ipv4"
	I1227 09:35:49.612009       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-27T09:35:49Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1227 09:35:49.907016       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1227 09:35:50.006005       1 controller.go:381] "Waiting for informer caches to sync"
	I1227 09:35:50.006069       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1227 09:35:50.006309       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1227 09:35:50.306421       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1227 09:35:50.306454       1 metrics.go:72] Registering metrics
	I1227 09:35:50.306517       1 controller.go:711] "Syncing nftables rules"
	I1227 09:35:59.905925       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1227 09:35:59.905963       1 main.go:301] handling current node
	I1227 09:36:09.905891       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1227 09:36:09.905926       1 main.go:301] handling current node
	I1227 09:36:19.906456       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1227 09:36:19.906484       1 main.go:301] handling current node
	I1227 09:36:29.905902       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1227 09:36:29.905938       1 main.go:301] handling current node
	I1227 09:36:39.913135       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1227 09:36:39.913163       1 main.go:301] handling current node
	
	
	==> kube-apiserver [9f6ac155f6a42bf5c5966c64219e0b13256e12d474b8bdc4feb2e3846eeca31d] <==
	I1227 09:35:48.831664       1 controller.go:78] Starting OpenAPI AggregationController
	I1227 09:35:48.913666       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1227 09:35:48.926948       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1227 09:35:48.927851       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1227 09:35:48.928848       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1227 09:35:48.929299       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1227 09:35:48.929336       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1227 09:35:48.929503       1 shared_informer.go:318] Caches are synced for configmaps
	I1227 09:35:48.929661       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1227 09:35:48.929687       1 aggregator.go:166] initial CRD sync complete...
	I1227 09:35:48.929695       1 autoregister_controller.go:141] Starting autoregister controller
	I1227 09:35:48.929701       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1227 09:35:48.929707       1 cache.go:39] Caches are synced for autoregister controller
	I1227 09:35:48.931905       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1227 09:35:49.830500       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1227 09:35:50.087741       1 controller.go:624] quota admission added evaluator for: namespaces
	I1227 09:35:50.120190       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1227 09:35:50.138019       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1227 09:35:50.146101       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1227 09:35:50.152598       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1227 09:35:50.192510       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.104.79.181"}
	I1227 09:35:50.206329       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.109.41.95"}
	I1227 09:36:01.402418       1 controller.go:624] quota admission added evaluator for: endpoints
	I1227 09:36:01.404399       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1227 09:36:01.447335       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [8223e42bf97a022ceca335fd381b6b6c3aeac7c607125c2e1cbf3b803876c7ad] <==
	I1227 09:36:01.454816       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="47.469932ms"
	I1227 09:36:01.457703       1 endpointslice_controller.go:310] "Error syncing endpoint slices for service, retrying" key="kubernetes-dashboard/dashboard-metrics-scraper" err="EndpointSlice informer cache is out of date"
	I1227 09:36:01.458442       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="50.460169ms"
	I1227 09:36:01.459670       1 endpointslice_controller.go:310] "Error syncing endpoint slices for service, retrying" key="kubernetes-dashboard/kubernetes-dashboard" err="EndpointSlice informer cache is out of date"
	I1227 09:36:01.470137       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="11.643692ms"
	I1227 09:36:01.470921       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="63.656µs"
	I1227 09:36:01.471257       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="16.297353ms"
	I1227 09:36:01.471560       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="244.081µs"
	I1227 09:36:01.471996       1 shared_informer.go:318] Caches are synced for PV protection
	I1227 09:36:01.473852       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="2.250132ms"
	I1227 09:36:01.480621       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="54.682µs"
	I1227 09:36:01.512127       1 shared_informer.go:318] Caches are synced for persistent volume
	I1227 09:36:01.531013       1 shared_informer.go:318] Caches are synced for attach detach
	I1227 09:36:01.927378       1 shared_informer.go:318] Caches are synced for garbage collector
	I1227 09:36:01.969819       1 shared_informer.go:318] Caches are synced for garbage collector
	I1227 09:36:01.969848       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1227 09:36:05.060103       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="62.634µs"
	I1227 09:36:06.066832       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="67.043µs"
	I1227 09:36:07.066328       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="83.858µs"
	I1227 09:36:09.072658       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="4.734144ms"
	I1227 09:36:09.072766       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="60.699µs"
	I1227 09:36:27.114580       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="74.99µs"
	I1227 09:36:29.014601       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="11.091413ms"
	I1227 09:36:29.014987       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="241.018µs"
	I1227 09:36:31.755146       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="59.698µs"
	
	
	==> kube-proxy [784231ee46284fb40820138878c1485b83721a6aee020dd20705f6b5990a1df6] <==
	I1227 09:35:49.438581       1 server_others.go:69] "Using iptables proxy"
	I1227 09:35:49.451569       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I1227 09:35:49.479291       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1227 09:35:49.482575       1 server_others.go:152] "Using iptables Proxier"
	I1227 09:35:49.482682       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1227 09:35:49.483001       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1227 09:35:49.483093       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1227 09:35:49.483411       1 server.go:846] "Version info" version="v1.28.0"
	I1227 09:35:49.483891       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 09:35:49.485008       1 config.go:188] "Starting service config controller"
	I1227 09:35:49.485078       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1227 09:35:49.485126       1 config.go:97] "Starting endpoint slice config controller"
	I1227 09:35:49.485155       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1227 09:35:49.486356       1 config.go:315] "Starting node config controller"
	I1227 09:35:49.486409       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1227 09:35:49.586232       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1227 09:35:49.586426       1 shared_informer.go:318] Caches are synced for service config
	I1227 09:35:49.586770       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [e5693fba043840c8a3d1117c5220be21e1cfd4a801e563c4512c5828ce4adbcd] <==
	I1227 09:35:47.146644       1 serving.go:348] Generated self-signed cert in-memory
	W1227 09:35:48.881204       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1227 09:35:48.881254       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1227 09:35:48.881267       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1227 09:35:48.881277       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1227 09:35:48.911547       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1227 09:35:48.911696       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 09:35:48.914618       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1227 09:35:48.914703       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1227 09:35:48.914956       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1227 09:35:48.915057       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1227 09:35:49.015384       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 27 09:36:01 old-k8s-version-094398 kubelet[729]: I1227 09:36:01.561085     729 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/ed747cfb-658d-49af-a938-883bb87814f1-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-vv6xx\" (UID: \"ed747cfb-658d-49af-a938-883bb87814f1\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-vv6xx"
	Dec 27 09:36:01 old-k8s-version-094398 kubelet[729]: I1227 09:36:01.561169     729 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/77fcfcb8-b434-4190-af5d-903ac4004b5c-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-5jv6d\" (UID: \"77fcfcb8-b434-4190-af5d-903ac4004b5c\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-5jv6d"
	Dec 27 09:36:01 old-k8s-version-094398 kubelet[729]: I1227 09:36:01.561218     729 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zwv4\" (UniqueName: \"kubernetes.io/projected/ed747cfb-658d-49af-a938-883bb87814f1-kube-api-access-9zwv4\") pod \"dashboard-metrics-scraper-5f989dc9cf-vv6xx\" (UID: \"ed747cfb-658d-49af-a938-883bb87814f1\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-vv6xx"
	Dec 27 09:36:01 old-k8s-version-094398 kubelet[729]: I1227 09:36:01.561254     729 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grl2t\" (UniqueName: \"kubernetes.io/projected/77fcfcb8-b434-4190-af5d-903ac4004b5c-kube-api-access-grl2t\") pod \"kubernetes-dashboard-8694d4445c-5jv6d\" (UID: \"77fcfcb8-b434-4190-af5d-903ac4004b5c\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-5jv6d"
	Dec 27 09:36:05 old-k8s-version-094398 kubelet[729]: I1227 09:36:05.042534     729 scope.go:117] "RemoveContainer" containerID="5a2061b4773bb46fa9d29ac94c9f071b01ba584f2971f1c2e37bd5dd2ab50286"
	Dec 27 09:36:06 old-k8s-version-094398 kubelet[729]: I1227 09:36:06.048446     729 scope.go:117] "RemoveContainer" containerID="5a2061b4773bb46fa9d29ac94c9f071b01ba584f2971f1c2e37bd5dd2ab50286"
	Dec 27 09:36:06 old-k8s-version-094398 kubelet[729]: I1227 09:36:06.048644     729 scope.go:117] "RemoveContainer" containerID="1679ca42635a728007911cb9006ec19e95201273d41d4c1933cc2a77a19eaa52"
	Dec 27 09:36:06 old-k8s-version-094398 kubelet[729]: E1227 09:36:06.049053     729 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-vv6xx_kubernetes-dashboard(ed747cfb-658d-49af-a938-883bb87814f1)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-vv6xx" podUID="ed747cfb-658d-49af-a938-883bb87814f1"
	Dec 27 09:36:07 old-k8s-version-094398 kubelet[729]: I1227 09:36:07.053368     729 scope.go:117] "RemoveContainer" containerID="1679ca42635a728007911cb9006ec19e95201273d41d4c1933cc2a77a19eaa52"
	Dec 27 09:36:07 old-k8s-version-094398 kubelet[729]: E1227 09:36:07.053740     729 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-vv6xx_kubernetes-dashboard(ed747cfb-658d-49af-a938-883bb87814f1)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-vv6xx" podUID="ed747cfb-658d-49af-a938-883bb87814f1"
	Dec 27 09:36:09 old-k8s-version-094398 kubelet[729]: I1227 09:36:09.067738     729 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-5jv6d" podStartSLOduration=1.829931582 podCreationTimestamp="2025-12-27 09:36:01 +0000 UTC" firstStartedPulling="2025-12-27 09:36:01.783252961 +0000 UTC m=+15.935911284" lastFinishedPulling="2025-12-27 09:36:08.020975565 +0000 UTC m=+22.173633702" observedRunningTime="2025-12-27 09:36:09.067575208 +0000 UTC m=+23.220233352" watchObservedRunningTime="2025-12-27 09:36:09.067654 +0000 UTC m=+23.220312143"
	Dec 27 09:36:11 old-k8s-version-094398 kubelet[729]: I1227 09:36:11.744817     729 scope.go:117] "RemoveContainer" containerID="1679ca42635a728007911cb9006ec19e95201273d41d4c1933cc2a77a19eaa52"
	Dec 27 09:36:11 old-k8s-version-094398 kubelet[729]: E1227 09:36:11.745246     729 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-vv6xx_kubernetes-dashboard(ed747cfb-658d-49af-a938-883bb87814f1)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-vv6xx" podUID="ed747cfb-658d-49af-a938-883bb87814f1"
	Dec 27 09:36:20 old-k8s-version-094398 kubelet[729]: I1227 09:36:20.082838     729 scope.go:117] "RemoveContainer" containerID="6f2897ce522ad46f87485549185bcbb88c4bb30c132de49b4d738824df1f3616"
	Dec 27 09:36:26 old-k8s-version-094398 kubelet[729]: I1227 09:36:26.955276     729 scope.go:117] "RemoveContainer" containerID="1679ca42635a728007911cb9006ec19e95201273d41d4c1933cc2a77a19eaa52"
	Dec 27 09:36:27 old-k8s-version-094398 kubelet[729]: I1227 09:36:27.102999     729 scope.go:117] "RemoveContainer" containerID="1679ca42635a728007911cb9006ec19e95201273d41d4c1933cc2a77a19eaa52"
	Dec 27 09:36:27 old-k8s-version-094398 kubelet[729]: I1227 09:36:27.103230     729 scope.go:117] "RemoveContainer" containerID="e515e63f9478f3764e817ba81f243e7bec7930bfcdb3d016306773fdc664f99a"
	Dec 27 09:36:27 old-k8s-version-094398 kubelet[729]: E1227 09:36:27.103598     729 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-vv6xx_kubernetes-dashboard(ed747cfb-658d-49af-a938-883bb87814f1)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-vv6xx" podUID="ed747cfb-658d-49af-a938-883bb87814f1"
	Dec 27 09:36:31 old-k8s-version-094398 kubelet[729]: I1227 09:36:31.743889     729 scope.go:117] "RemoveContainer" containerID="e515e63f9478f3764e817ba81f243e7bec7930bfcdb3d016306773fdc664f99a"
	Dec 27 09:36:31 old-k8s-version-094398 kubelet[729]: E1227 09:36:31.744215     729 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-vv6xx_kubernetes-dashboard(ed747cfb-658d-49af-a938-883bb87814f1)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-vv6xx" podUID="ed747cfb-658d-49af-a938-883bb87814f1"
	Dec 27 09:36:42 old-k8s-version-094398 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 27 09:36:42 old-k8s-version-094398 kubelet[729]: I1227 09:36:42.782246     729 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Dec 27 09:36:42 old-k8s-version-094398 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 27 09:36:42 old-k8s-version-094398 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 27 09:36:42 old-k8s-version-094398 systemd[1]: kubelet.service: Consumed 1.615s CPU time.
	
	
	==> kubernetes-dashboard [226e6694486c7dbda528967c387d7f6eba3323ceedc06d943992c2aa1a31b965] <==
	2025/12/27 09:36:08 Using namespace: kubernetes-dashboard
	2025/12/27 09:36:08 Using in-cluster config to connect to apiserver
	2025/12/27 09:36:08 Using secret token for csrf signing
	2025/12/27 09:36:08 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/27 09:36:08 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/27 09:36:08 Successful initial request to the apiserver, version: v1.28.0
	2025/12/27 09:36:08 Generating JWE encryption key
	2025/12/27 09:36:08 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/27 09:36:08 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/27 09:36:08 Initializing JWE encryption key from synchronized object
	2025/12/27 09:36:08 Creating in-cluster Sidecar client
	2025/12/27 09:36:08 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/27 09:36:08 Serving insecurely on HTTP port: 9090
	2025/12/27 09:36:08 Starting overwatch
	2025/12/27 09:36:38 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [2613f183356984cb4a6f283d937f3370b199b0a7966e716e98cf84ee962e01be] <==
	I1227 09:36:20.121655       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1227 09:36:20.129990       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1227 09:36:20.130041       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1227 09:36:37.525772       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1227 09:36:37.525899       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"46ce4ee8-5463-4b1a-acf6-c144e54f0eef", APIVersion:"v1", ResourceVersion:"654", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-094398_eeb43a71-c1ba-4383-8e34-c22f4ae0aa6d became leader
	I1227 09:36:37.526004       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-094398_eeb43a71-c1ba-4383-8e34-c22f4ae0aa6d!
	I1227 09:36:37.626267       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-094398_eeb43a71-c1ba-4383-8e34-c22f4ae0aa6d!
	
	
	==> storage-provisioner [6f2897ce522ad46f87485549185bcbb88c4bb30c132de49b4d738824df1f3616] <==
	I1227 09:35:49.382051       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1227 09:36:19.385357       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
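
The storage-provisioner crash captured above (`main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout`) is the usual in-cluster startup probe: before doing any work, the pod asks the apiserver for its version through the service VIP (10.96.0.1:443 here). A minimal, hypothetical client-go sketch of that probe follows; this is not the provisioner's actual source, only an illustration of the failing call path:

	// Hypothetical sketch of the startup probe that fails above with
	// "dial tcp 10.96.0.1:443: i/o timeout" -- not minikube's actual code.
	package main
	
	import (
		"log"
	
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
	)
	
	func main() {
		// In-cluster config resolves the apiserver via the service VIP.
		cfg, err := rest.InClusterConfig()
		if err != nil {
			log.Fatalf("error building in-cluster config: %v", err)
		}
		clientset, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatalf("error building clientset: %v", err)
		}
		// Mirrors the failing call: a /version request against the apiserver.
		v, err := clientset.Discovery().ServerVersion()
		if err != nil {
			// This is the branch the log above hits (main.go:39).
			log.Fatalf("error getting server version: %v", err)
		}
		log.Printf("server version: %s", v.GitVersion)
	}
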
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-094398 -n old-k8s-version-094398
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-094398 -n old-k8s-version-094398: exit status 2 (320.303786ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context old-k8s-version-094398 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (5.90s)
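
For reference, the tolerant status check in the post-mortem above (`status error: exit status 2 (may be ok)`) can be reproduced outside the suite. A minimal sketch, assuming the same binary and profile as the trace; the command string is copied verbatim from the helpers_test.go:263 invocation, and the exit-code handling mirrors the "may be ok" tolerance shown there:

	// Hypothetical re-run of the post-mortem status check; exit status 2
	// means the cluster is degraded but present and is tolerated.
	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	func main() {
		cmd := exec.Command("out/minikube-linux-amd64", "status",
			"--format={{.APIServer}}", "-p", "old-k8s-version-094398",
			"-n", "old-k8s-version-094398")
		out, err := cmd.CombinedOutput()
		fmt.Printf("output: %s", out) // prints "Running" in the trace above
		if err != nil {
			if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 2 {
				fmt.Println("status error: exit status 2 (may be ok)")
				return
			}
			fmt.Printf("status failed: %v\n", err)
		}
	}
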

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Pause (7.59s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-912564 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p embed-certs-912564 --alsologtostderr -v=1: exit status 80 (2.539120108s)

                                                
                                                
-- stdout --
	* Pausing node embed-certs-912564 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1227 09:37:12.615748  637026 out.go:360] Setting OutFile to fd 1 ...
	I1227 09:37:12.616055  637026 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:37:12.616067  637026 out.go:374] Setting ErrFile to fd 2...
	I1227 09:37:12.616075  637026 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:37:12.616284  637026 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22343-373581/.minikube/bin
	I1227 09:37:12.616542  637026 out.go:368] Setting JSON to false
	I1227 09:37:12.616563  637026 mustload.go:66] Loading cluster: embed-certs-912564
	I1227 09:37:12.616945  637026 config.go:182] Loaded profile config "embed-certs-912564": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:37:12.617356  637026 cli_runner.go:164] Run: docker container inspect embed-certs-912564 --format={{.State.Status}}
	I1227 09:37:12.636747  637026 host.go:66] Checking if "embed-certs-912564" exists ...
	I1227 09:37:12.637068  637026 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 09:37:12.692567  637026 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:80 OomKillDisable:false NGoroutines:87 SystemTime:2025-12-27 09:37:12.682392847 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1227 09:37:12.693204  637026 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22158/minikube-v1.37.0-1766719468-22158-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1766719468-22158/minikube-v1.37.0-1766719468-22158-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1766719468-22158-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) preload-source:auto profile:embed-certs-912564 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) rosetta:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1227 09:37:12.694898  637026 out.go:179] * Pausing node embed-certs-912564 ... 
	I1227 09:37:12.695924  637026 host.go:66] Checking if "embed-certs-912564" exists ...
	I1227 09:37:12.696232  637026 ssh_runner.go:195] Run: systemctl --version
	I1227 09:37:12.696289  637026 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-912564
	I1227 09:37:12.712867  637026 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/embed-certs-912564/id_rsa Username:docker}
	I1227 09:37:12.802653  637026 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 09:37:12.815512  637026 pause.go:52] kubelet running: true
	I1227 09:37:12.815575  637026 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1227 09:37:13.015309  637026 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1227 09:37:13.015392  637026 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1227 09:37:13.081823  637026 cri.go:96] found id: "84a07cadb1d9c9aa419b2726218cd527981d9603da3c51e418fea57634d1077b"
	I1227 09:37:13.081848  637026 cri.go:96] found id: "bbe499943555262124a4668032443d02c3df7d492d67bc1fcde5ffe6d8bfbec7"
	I1227 09:37:13.081855  637026 cri.go:96] found id: "e321884e2b0761fec8e2206091da271f0e89b9140101ad1d66d55d4f2d049606"
	I1227 09:37:13.081859  637026 cri.go:96] found id: "d3dff99ecfa4aa14f6bbd97b1487dfae36574c672747a8bf6c8790ecad04653a"
	I1227 09:37:13.081864  637026 cri.go:96] found id: "7281d5c2323a07673639860e705dd2779b623d238dca9f09c1c16c035ce01a03"
	I1227 09:37:13.081869  637026 cri.go:96] found id: "5383d4cdce95af97f9b9e8e07db61c856f19c8db586c179d8ff736a43046829e"
	I1227 09:37:13.081873  637026 cri.go:96] found id: "4073c03ac98fe56856e504a1aa0d5a1748d26e6ce500dc31ad8e91ee49384cd6"
	I1227 09:37:13.081878  637026 cri.go:96] found id: "ba83fd494a8c5a1bc7eb22555934e2b74494963aa284a3786fa73f76c60a9175"
	I1227 09:37:13.081881  637026 cri.go:96] found id: "663c76b88f42532f7c763b6916bdc80252b590b27aa690c8fe09d547aca1eb6c"
	I1227 09:37:13.081906  637026 cri.go:96] found id: "6e0f67ae51171b32108d2707ab568fce0ed3dd12409f5f0aaedd5e6e725a0f8a"
	I1227 09:37:13.081915  637026 cri.go:96] found id: "6014177c8a2040f77b17c42e8d6d1005c64bd49b4525e35b0d0a748ac43eeb31"
	I1227 09:37:13.081919  637026 cri.go:96] found id: ""
	I1227 09:37:13.081976  637026 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 09:37:13.095416  637026 retry.go:84] will retry after 200ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T09:37:13Z" level=error msg="open /run/runc: no such file or directory"
	I1227 09:37:13.305873  637026 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 09:37:13.323187  637026 pause.go:52] kubelet running: false
	I1227 09:37:13.323278  637026 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1227 09:37:13.575823  637026 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1227 09:37:13.575903  637026 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1227 09:37:13.686222  637026 cri.go:96] found id: "84a07cadb1d9c9aa419b2726218cd527981d9603da3c51e418fea57634d1077b"
	I1227 09:37:13.686250  637026 cri.go:96] found id: "bbe499943555262124a4668032443d02c3df7d492d67bc1fcde5ffe6d8bfbec7"
	I1227 09:37:13.686256  637026 cri.go:96] found id: "e321884e2b0761fec8e2206091da271f0e89b9140101ad1d66d55d4f2d049606"
	I1227 09:37:13.686260  637026 cri.go:96] found id: "d3dff99ecfa4aa14f6bbd97b1487dfae36574c672747a8bf6c8790ecad04653a"
	I1227 09:37:13.686265  637026 cri.go:96] found id: "7281d5c2323a07673639860e705dd2779b623d238dca9f09c1c16c035ce01a03"
	I1227 09:37:13.686270  637026 cri.go:96] found id: "5383d4cdce95af97f9b9e8e07db61c856f19c8db586c179d8ff736a43046829e"
	I1227 09:37:13.686274  637026 cri.go:96] found id: "4073c03ac98fe56856e504a1aa0d5a1748d26e6ce500dc31ad8e91ee49384cd6"
	I1227 09:37:13.686278  637026 cri.go:96] found id: "ba83fd494a8c5a1bc7eb22555934e2b74494963aa284a3786fa73f76c60a9175"
	I1227 09:37:13.686282  637026 cri.go:96] found id: "663c76b88f42532f7c763b6916bdc80252b590b27aa690c8fe09d547aca1eb6c"
	I1227 09:37:13.686300  637026 cri.go:96] found id: "6e0f67ae51171b32108d2707ab568fce0ed3dd12409f5f0aaedd5e6e725a0f8a"
	I1227 09:37:13.686305  637026 cri.go:96] found id: "6014177c8a2040f77b17c42e8d6d1005c64bd49b4525e35b0d0a748ac43eeb31"
	I1227 09:37:13.686309  637026 cri.go:96] found id: ""
	I1227 09:37:13.686359  637026 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 09:37:13.946742  637026 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 09:37:13.966427  637026 pause.go:52] kubelet running: false
	I1227 09:37:13.966490  637026 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1227 09:37:14.192220  637026 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1227 09:37:14.192303  637026 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1227 09:37:14.277587  637026 cri.go:96] found id: "84a07cadb1d9c9aa419b2726218cd527981d9603da3c51e418fea57634d1077b"
	I1227 09:37:14.277614  637026 cri.go:96] found id: "bbe499943555262124a4668032443d02c3df7d492d67bc1fcde5ffe6d8bfbec7"
	I1227 09:37:14.277620  637026 cri.go:96] found id: "e321884e2b0761fec8e2206091da271f0e89b9140101ad1d66d55d4f2d049606"
	I1227 09:37:14.277625  637026 cri.go:96] found id: "d3dff99ecfa4aa14f6bbd97b1487dfae36574c672747a8bf6c8790ecad04653a"
	I1227 09:37:14.277629  637026 cri.go:96] found id: "7281d5c2323a07673639860e705dd2779b623d238dca9f09c1c16c035ce01a03"
	I1227 09:37:14.277633  637026 cri.go:96] found id: "5383d4cdce95af97f9b9e8e07db61c856f19c8db586c179d8ff736a43046829e"
	I1227 09:37:14.277638  637026 cri.go:96] found id: "4073c03ac98fe56856e504a1aa0d5a1748d26e6ce500dc31ad8e91ee49384cd6"
	I1227 09:37:14.277643  637026 cri.go:96] found id: "ba83fd494a8c5a1bc7eb22555934e2b74494963aa284a3786fa73f76c60a9175"
	I1227 09:37:14.277648  637026 cri.go:96] found id: "663c76b88f42532f7c763b6916bdc80252b590b27aa690c8fe09d547aca1eb6c"
	I1227 09:37:14.277656  637026 cri.go:96] found id: "6e0f67ae51171b32108d2707ab568fce0ed3dd12409f5f0aaedd5e6e725a0f8a"
	I1227 09:37:14.277660  637026 cri.go:96] found id: "6014177c8a2040f77b17c42e8d6d1005c64bd49b4525e35b0d0a748ac43eeb31"
	I1227 09:37:14.277665  637026 cri.go:96] found id: ""
	I1227 09:37:14.277713  637026 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 09:37:14.731275  637026 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 09:37:14.750238  637026 pause.go:52] kubelet running: false
	I1227 09:37:14.750300  637026 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1227 09:37:14.967310  637026 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1227 09:37:14.967416  637026 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1227 09:37:15.070632  637026 cri.go:96] found id: "84a07cadb1d9c9aa419b2726218cd527981d9603da3c51e418fea57634d1077b"
	I1227 09:37:15.070670  637026 cri.go:96] found id: "bbe499943555262124a4668032443d02c3df7d492d67bc1fcde5ffe6d8bfbec7"
	I1227 09:37:15.070676  637026 cri.go:96] found id: "e321884e2b0761fec8e2206091da271f0e89b9140101ad1d66d55d4f2d049606"
	I1227 09:37:15.070681  637026 cri.go:96] found id: "d3dff99ecfa4aa14f6bbd97b1487dfae36574c672747a8bf6c8790ecad04653a"
	I1227 09:37:15.070686  637026 cri.go:96] found id: "7281d5c2323a07673639860e705dd2779b623d238dca9f09c1c16c035ce01a03"
	I1227 09:37:15.070691  637026 cri.go:96] found id: "5383d4cdce95af97f9b9e8e07db61c856f19c8db586c179d8ff736a43046829e"
	I1227 09:37:15.070696  637026 cri.go:96] found id: "4073c03ac98fe56856e504a1aa0d5a1748d26e6ce500dc31ad8e91ee49384cd6"
	I1227 09:37:15.070700  637026 cri.go:96] found id: "ba83fd494a8c5a1bc7eb22555934e2b74494963aa284a3786fa73f76c60a9175"
	I1227 09:37:15.070704  637026 cri.go:96] found id: "663c76b88f42532f7c763b6916bdc80252b590b27aa690c8fe09d547aca1eb6c"
	I1227 09:37:15.070712  637026 cri.go:96] found id: "6e0f67ae51171b32108d2707ab568fce0ed3dd12409f5f0aaedd5e6e725a0f8a"
	I1227 09:37:15.070717  637026 cri.go:96] found id: "6014177c8a2040f77b17c42e8d6d1005c64bd49b4525e35b0d0a748ac43eeb31"
	I1227 09:37:15.071094  637026 cri.go:96] found id: ""
	I1227 09:37:15.071163  637026 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 09:37:15.088933  637026 out.go:203] 
	W1227 09:37:15.090059  637026 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T09:37:15Z" level=error msg="open /run/runc: no such file or directory"
	
	W1227 09:37:15.090083  637026 out.go:285] * 
	W1227 09:37:15.093374  637026 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1227 09:37:15.094627  637026 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p embed-certs-912564 --alsologtostderr -v=1 failed: exit status 80
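
The pause fails because every container-listing attempt in the trace shells out to `sudo runc list -f json`, which errors with `open /run/runc: no such file or directory`; minikube retries after a 200ms backoff (retry.go:84) and finally aborts with GUEST_PAUSE. A minimal sketch of that list-and-retry loop follows, assuming plain os/exec; this is not minikube's actual pause code, and the attempt count and fixed backoff are illustrative beyond the single 200ms retry shown above:

	// Minimal sketch (not minikube's code) of the failing step above:
	// shell out to `sudo runc list -f json`, retry with a short backoff,
	// then give up with a GUEST_PAUSE-style error.
	package main
	
	import (
		"fmt"
		"os/exec"
		"time"
	)
	
	func listRunningContainers() ([]byte, error) {
		// Fails when /run/runc does not exist on the node.
		return exec.Command("sudo", "runc", "list", "-f", "json").Output()
	}
	
	func main() {
		var lastErr error
		backoff := 200 * time.Millisecond // matches "will retry after 200ms"
		for attempt := 0; attempt < 4; attempt++ {
			out, err := listRunningContainers()
			if err == nil {
				fmt.Printf("runc list: %s\n", out)
				return
			}
			lastErr = err
			time.Sleep(backoff)
		}
		fmt.Printf("Exiting due to GUEST_PAUSE: list running: runc: %v\n", lastErr)
	}
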
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect embed-certs-912564
helpers_test.go:244: (dbg) docker inspect embed-certs-912564:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d1131cb70c56d219673852b4cb35a6ce18fd724e4080483a56df71d480e5a9d8",
	        "Created": "2025-12-27T09:35:13.90835085Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 622541,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-27T09:36:15.973292467Z",
	            "FinishedAt": "2025-12-27T09:36:14.524926326Z"
	        },
	        "Image": "sha256:c444b5e11df9d6bc496256b0e11f4e11deb33a885211ded2c3a4667df55bcb6b",
	        "ResolvConfPath": "/var/lib/docker/containers/d1131cb70c56d219673852b4cb35a6ce18fd724e4080483a56df71d480e5a9d8/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d1131cb70c56d219673852b4cb35a6ce18fd724e4080483a56df71d480e5a9d8/hostname",
	        "HostsPath": "/var/lib/docker/containers/d1131cb70c56d219673852b4cb35a6ce18fd724e4080483a56df71d480e5a9d8/hosts",
	        "LogPath": "/var/lib/docker/containers/d1131cb70c56d219673852b4cb35a6ce18fd724e4080483a56df71d480e5a9d8/d1131cb70c56d219673852b4cb35a6ce18fd724e4080483a56df71d480e5a9d8-json.log",
	        "Name": "/embed-certs-912564",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "embed-certs-912564:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-912564",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d1131cb70c56d219673852b4cb35a6ce18fd724e4080483a56df71d480e5a9d8",
	                "LowerDir": "/var/lib/docker/overlay2/817468c022fdafe4e781f5a83226ccdb150ad3173bf064c178c11a690bbee996-init/diff:/var/lib/docker/overlay2/8c7e7d4b0a6e752d3b033998593f18412fe880cdae3fca36ce4655d5d5dd6a34/diff",
	                "MergedDir": "/var/lib/docker/overlay2/817468c022fdafe4e781f5a83226ccdb150ad3173bf064c178c11a690bbee996/merged",
	                "UpperDir": "/var/lib/docker/overlay2/817468c022fdafe4e781f5a83226ccdb150ad3173bf064c178c11a690bbee996/diff",
	                "WorkDir": "/var/lib/docker/overlay2/817468c022fdafe4e781f5a83226ccdb150ad3173bf064c178c11a690bbee996/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "embed-certs-912564",
	                "Source": "/var/lib/docker/volumes/embed-certs-912564/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-912564",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-912564",
	                "name.minikube.sigs.k8s.io": "embed-certs-912564",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "d749012f8156d0aba2cde7e9a914c49b96e9059f4ffbfc3b583ff42b55f235b2",
	            "SandboxKey": "/var/run/docker/netns/d749012f8156",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33453"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33454"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33457"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33455"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33456"
	                    }
	                ]
	            },
	            "Networks": {
	                "embed-certs-912564": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "a8636b7bd1bb3a484b5591e16629c2b067fb4955cb3fcafbd69f576a7b19eb9b",
	                    "EndpointID": "5dc9accb47f9af8c718d36522128972b22a96cd365e1de1c03caccfdb94aa446",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "16:29:ba:06:27:16",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-912564",
	                        "d1131cb70c56"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
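Note: the "Ports" map in the inspect output above records the loopback ports Docker assigned to the container, and it is what the test harness queries (with the same Go template seen throughout this log) to find the SSH endpoint. A minimal reproduction, assuming the embed-certs-912564 container still exists:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' embed-certs-912564
	# for the dump above this prints 33453, the host side of the container's SSH port
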
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-912564 -n embed-certs-912564
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-912564 -n embed-certs-912564: exit status 2 (446.844583ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-912564 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-912564 logs -n 25: (2.147520804s)
helpers_test.go:261: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p no-preload-963457 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                       │ no-preload-963457            │ jenkins │ v1.37.0 │ 27 Dec 25 09:35 UTC │ 27 Dec 25 09:36 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-094398 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-094398       │ jenkins │ v1.37.0 │ 27 Dec 25 09:35 UTC │ 27 Dec 25 09:35 UTC │
	│ start   │ -p old-k8s-version-094398 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-094398       │ jenkins │ v1.37.0 │ 27 Dec 25 09:35 UTC │ 27 Dec 25 09:36 UTC │
	│ delete  │ -p stopped-upgrade-196124                                                                                                                                                                                                                     │ stopped-upgrade-196124       │ jenkins │ v1.37.0 │ 27 Dec 25 09:35 UTC │ 27 Dec 25 09:35 UTC │
	│ start   │ -p default-k8s-diff-port-497722 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-497722 │ jenkins │ v1.37.0 │ 27 Dec 25 09:35 UTC │ 27 Dec 25 09:36 UTC │
	│ addons  │ enable metrics-server -p embed-certs-912564 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-912564           │ jenkins │ v1.37.0 │ 27 Dec 25 09:35 UTC │                     │
	│ stop    │ -p embed-certs-912564 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-912564           │ jenkins │ v1.37.0 │ 27 Dec 25 09:35 UTC │ 27 Dec 25 09:36 UTC │
	│ addons  │ enable dashboard -p embed-certs-912564 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-912564           │ jenkins │ v1.37.0 │ 27 Dec 25 09:36 UTC │ 27 Dec 25 09:36 UTC │
	│ start   │ -p embed-certs-912564 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                        │ embed-certs-912564           │ jenkins │ v1.37.0 │ 27 Dec 25 09:36 UTC │ 27 Dec 25 09:36 UTC │
	│ addons  │ enable metrics-server -p no-preload-963457 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-963457            │ jenkins │ v1.37.0 │ 27 Dec 25 09:36 UTC │                     │
	│ stop    │ -p no-preload-963457 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-963457            │ jenkins │ v1.37.0 │ 27 Dec 25 09:36 UTC │ 27 Dec 25 09:36 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-497722 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-497722 │ jenkins │ v1.37.0 │ 27 Dec 25 09:36 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-497722 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-497722 │ jenkins │ v1.37.0 │ 27 Dec 25 09:36 UTC │ 27 Dec 25 09:36 UTC │
	│ image   │ old-k8s-version-094398 image list --format=json                                                                                                                                                                                               │ old-k8s-version-094398       │ jenkins │ v1.37.0 │ 27 Dec 25 09:36 UTC │ 27 Dec 25 09:36 UTC │
	│ pause   │ -p old-k8s-version-094398 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-094398       │ jenkins │ v1.37.0 │ 27 Dec 25 09:36 UTC │                     │
	│ delete  │ -p old-k8s-version-094398                                                                                                                                                                                                                     │ old-k8s-version-094398       │ jenkins │ v1.37.0 │ 27 Dec 25 09:36 UTC │ 27 Dec 25 09:36 UTC │
	│ addons  │ enable dashboard -p no-preload-963457 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-963457            │ jenkins │ v1.37.0 │ 27 Dec 25 09:36 UTC │ 27 Dec 25 09:36 UTC │
	│ start   │ -p no-preload-963457 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                       │ no-preload-963457            │ jenkins │ v1.37.0 │ 27 Dec 25 09:36 UTC │                     │
	│ delete  │ -p old-k8s-version-094398                                                                                                                                                                                                                     │ old-k8s-version-094398       │ jenkins │ v1.37.0 │ 27 Dec 25 09:36 UTC │ 27 Dec 25 09:36 UTC │
	│ start   │ -p newest-cni-246956 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ newest-cni-246956            │ jenkins │ v1.37.0 │ 27 Dec 25 09:36 UTC │ 27 Dec 25 09:37 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-497722 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-497722 │ jenkins │ v1.37.0 │ 27 Dec 25 09:36 UTC │ 27 Dec 25 09:36 UTC │
	│ start   │ -p default-k8s-diff-port-497722 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-497722 │ jenkins │ v1.37.0 │ 27 Dec 25 09:36 UTC │                     │
	│ image   │ embed-certs-912564 image list --format=json                                                                                                                                                                                                   │ embed-certs-912564           │ jenkins │ v1.37.0 │ 27 Dec 25 09:37 UTC │ 27 Dec 25 09:37 UTC │
	│ pause   │ -p embed-certs-912564 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-912564           │ jenkins │ v1.37.0 │ 27 Dec 25 09:37 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-246956 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-246956            │ jenkins │ v1.37.0 │ 27 Dec 25 09:37 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/27 09:36:56
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1227 09:36:56.118033  631392 out.go:360] Setting OutFile to fd 1 ...
	I1227 09:36:56.118317  631392 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:36:56.118328  631392 out.go:374] Setting ErrFile to fd 2...
	I1227 09:36:56.118332  631392 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:36:56.118604  631392 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22343-373581/.minikube/bin
	I1227 09:36:56.119089  631392 out.go:368] Setting JSON to false
	I1227 09:36:56.120292  631392 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4760,"bootTime":1766823456,"procs":323,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1227 09:36:56.120351  631392 start.go:143] virtualization: kvm guest
	I1227 09:36:56.122005  631392 out.go:179] * [default-k8s-diff-port-497722] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1227 09:36:56.123168  631392 notify.go:221] Checking for updates...
	I1227 09:36:56.123180  631392 out.go:179]   - MINIKUBE_LOCATION=22343
	I1227 09:36:56.124207  631392 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 09:36:56.125641  631392 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22343-373581/kubeconfig
	I1227 09:36:56.126923  631392 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22343-373581/.minikube
	I1227 09:36:56.127972  631392 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1227 09:36:56.129126  631392 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1227 09:36:56.130855  631392 config.go:182] Loaded profile config "default-k8s-diff-port-497722": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:36:56.131603  631392 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 09:36:56.156894  631392 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1227 09:36:56.156995  631392 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 09:36:56.237033  631392 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:72 OomKillDisable:false NGoroutines:89 SystemTime:2025-12-27 09:36:56.225326698 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1227 09:36:56.237183  631392 docker.go:319] overlay module found
	I1227 09:36:56.238784  631392 out.go:179] * Using the docker driver based on existing profile
	I1227 09:36:56.239920  631392 start.go:309] selected driver: docker
	I1227 09:36:56.239938  631392 start.go:928] validating driver "docker" against &{Name:default-k8s-diff-port-497722 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-497722 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 09:36:56.240055  631392 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 09:36:56.240864  631392 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 09:36:56.311407  631392 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:77 SystemTime:2025-12-27 09:36:56.301965993 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1227 09:36:56.311684  631392 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 09:36:56.311714  631392 cni.go:84] Creating CNI manager for ""
	I1227 09:36:56.311779  631392 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 09:36:56.311860  631392 start.go:353] cluster config:
	{Name:default-k8s-diff-port-497722 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-497722 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 09:36:56.313709  631392 out.go:179] * Starting "default-k8s-diff-port-497722" primary control-plane node in "default-k8s-diff-port-497722" cluster
	I1227 09:36:56.314728  631392 cache.go:134] Beginning downloading kic base image for docker with crio
	I1227 09:36:56.319525  631392 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
	I1227 09:36:51.503987  630355 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1227 09:36:51.504270  630355 start.go:159] libmachine.API.Create for "newest-cni-246956" (driver="docker")
	I1227 09:36:51.504305  630355 client.go:173] LocalClient.Create starting
	I1227 09:36:51.504380  630355 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca.pem
	I1227 09:36:51.504418  630355 main.go:144] libmachine: Decoding PEM data...
	I1227 09:36:51.504445  630355 main.go:144] libmachine: Parsing certificate...
	I1227 09:36:51.504530  630355 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22343-373581/.minikube/certs/cert.pem
	I1227 09:36:51.504560  630355 main.go:144] libmachine: Decoding PEM data...
	I1227 09:36:51.504578  630355 main.go:144] libmachine: Parsing certificate...
	I1227 09:36:51.505013  630355 cli_runner.go:164] Run: docker network inspect newest-cni-246956 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1227 09:36:51.521118  630355 cli_runner.go:211] docker network inspect newest-cni-246956 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1227 09:36:51.521200  630355 network_create.go:284] running [docker network inspect newest-cni-246956] to gather additional debugging logs...
	I1227 09:36:51.521226  630355 cli_runner.go:164] Run: docker network inspect newest-cni-246956
	W1227 09:36:51.537389  630355 cli_runner.go:211] docker network inspect newest-cni-246956 returned with exit code 1
	I1227 09:36:51.537414  630355 network_create.go:287] error running [docker network inspect newest-cni-246956]: docker network inspect newest-cni-246956: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-246956 not found
	I1227 09:36:51.537439  630355 network_create.go:289] output of [docker network inspect newest-cni-246956]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-246956 not found
	
	** /stderr **
	I1227 09:36:51.537527  630355 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 09:36:51.553978  630355 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-8c57ecff6d5e IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:1e:54:d8:ad:bd:73} reservation:<nil>}
	I1227 09:36:51.554821  630355 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-21a699476be6 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:fe:e8:d9:95:e6:36} reservation:<nil>}
	I1227 09:36:51.555324  630355 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-8e97c5356905 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:f2:d9:6b:42:f5:e3} reservation:<nil>}
	I1227 09:36:51.556124  630355 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e560a0}
	I1227 09:36:51.556148  630355 network_create.go:124] attempt to create docker network newest-cni-246956 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1227 09:36:51.556202  630355 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-246956 newest-cni-246956
	I1227 09:36:51.601256  630355 network_create.go:108] docker network newest-cni-246956 192.168.76.0/24 created
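Note: the "skipping subnet" lines above are minikube's free-subnet scan: each candidate 192.168.x.0/24 is rejected when an existing bridge interface already owns it, and the first free one (192.168.76.0/24 here) is passed to docker network create. A sketch of checking the taken subnets by hand, using only the docker CLI:

	# one line per network: its name and the subnet claimed in its IPAM config
	docker network ls --format '{{.Name}}' \
	  | xargs -r docker network inspect --format '{{.Name}} {{range .IPAM.Config}}{{.Subnet}}{{end}}'
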
	I1227 09:36:51.601292  630355 kic.go:121] calculated static IP "192.168.76.2" for the "newest-cni-246956" container
	I1227 09:36:51.601382  630355 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1227 09:36:51.618040  630355 cli_runner.go:164] Run: docker volume create newest-cni-246956 --label name.minikube.sigs.k8s.io=newest-cni-246956 --label created_by.minikube.sigs.k8s.io=true
	I1227 09:36:51.634779  630355 oci.go:103] Successfully created a docker volume newest-cni-246956
	I1227 09:36:51.634906  630355 cli_runner.go:164] Run: docker run --rm --name newest-cni-246956-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-246956 --entrypoint /usr/bin/test -v newest-cni-246956:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -d /var/lib
	I1227 09:36:51.985470  630355 oci.go:107] Successfully prepared a docker volume newest-cni-246956
	I1227 09:36:51.985539  630355 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 09:36:51.985556  630355 kic.go:194] Starting extracting preloaded images to volume ...
	I1227 09:36:51.985607  630355 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22343-373581/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-246956:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -I lz4 -xf /preloaded.tar -C /extractDir
	I1227 09:36:55.783686  630355 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22343-373581/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-246956:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -I lz4 -xf /preloaded.tar -C /extractDir: (3.798014883s)
	I1227 09:36:55.783722  630355 kic.go:203] duration metric: took 3.798163626s to extract preloaded images to volume ...
	W1227 09:36:55.783877  630355 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1227 09:36:55.783911  630355 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1227 09:36:55.783950  630355 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1227 09:36:55.845043  630355 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-246956 --name newest-cni-246956 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-246956 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-246956 --network newest-cni-246956 --ip 192.168.76.2 --volume newest-cni-246956:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a
	I1227 09:36:56.141349  630355 cli_runner.go:164] Run: docker container inspect newest-cni-246956 --format={{.State.Running}}
	I1227 09:36:56.161926  630355 cli_runner.go:164] Run: docker container inspect newest-cni-246956 --format={{.State.Status}}
	I1227 09:36:56.186737  630355 cli_runner.go:164] Run: docker exec newest-cni-246956 stat /var/lib/dpkg/alternatives/iptables
	I1227 09:36:56.251448  630355 oci.go:144] the created container "newest-cni-246956" has a running status.
	I1227 09:36:56.251484  630355 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22343-373581/.minikube/machines/newest-cni-246956/id_rsa...
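Note: the single-line docker run a few lines above is the actual creation of the node container. Reflowed here for readability (same flags as logged; the three --label flags are omitted):

	# privileged container running systemd from the kicbase image; seccomp and
	# apparmor are relaxed, /tmp and /run are tmpfs, /var lives on a named volume,
	# and all five guest ports are published to ephemeral loopback ports that are
	# read back later via docker container inspect
	docker run -d -t --privileged \
	  --security-opt seccomp=unconfined --security-opt apparmor=unconfined \
	  --tmpfs /tmp --tmpfs /run \
	  -v /lib/modules:/lib/modules:ro \
	  --hostname newest-cni-246956 --name newest-cni-246956 \
	  --network newest-cni-246956 --ip 192.168.76.2 \
	  --volume newest-cni-246956:/var \
	  --memory=3072mb -e container=docker \
	  --expose 8443 \
	  --publish=127.0.0.1::8443 --publish=127.0.0.1::22 \
	  --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 \
	  --publish=127.0.0.1::32443 \
	  gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a
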
	I1227 09:36:56.320494  631392 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 09:36:56.320535  631392 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22343-373581/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4
	I1227 09:36:56.320544  631392 cache.go:65] Caching tarball of preloaded images
	I1227 09:36:56.320642  631392 preload.go:251] Found /home/jenkins/minikube-integration/22343-373581/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1227 09:36:56.320635  631392 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1227 09:36:56.320657  631392 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I1227 09:36:56.320859  631392 profile.go:143] Saving config to /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/default-k8s-diff-port-497722/config.json ...
	I1227 09:36:56.345922  631392 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon, skipping pull
	I1227 09:36:56.345947  631392 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in daemon, skipping load
	I1227 09:36:56.345968  631392 cache.go:243] Successfully downloaded all kic artifacts
	I1227 09:36:56.346010  631392 start.go:360] acquireMachinesLock for default-k8s-diff-port-497722: {Name:mk952cc47ec82ed9310014186e6e4270fbb3e58d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 09:36:56.346079  631392 start.go:364] duration metric: took 44.824µs to acquireMachinesLock for "default-k8s-diff-port-497722"
	I1227 09:36:56.346102  631392 start.go:96] Skipping create...Using existing machine configuration
	I1227 09:36:56.346112  631392 fix.go:54] fixHost starting: 
	I1227 09:36:56.346414  631392 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-497722 --format={{.State.Status}}
	I1227 09:36:56.365133  631392 fix.go:112] recreateIfNeeded on default-k8s-diff-port-497722: state=Stopped err=<nil>
	W1227 09:36:56.365221  631392 fix.go:138] unexpected machine state, will restart: <nil>
	I1227 09:36:55.892570  629532 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1227 09:36:55.892596  629532 machine.go:97] duration metric: took 5.581836028s to provisionDockerMachine
	I1227 09:36:55.892610  629532 start.go:293] postStartSetup for "no-preload-963457" (driver="docker")
	I1227 09:36:55.892621  629532 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1227 09:36:55.892671  629532 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1227 09:36:55.892708  629532 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-963457
	I1227 09:36:55.914280  629532 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/no-preload-963457/id_rsa Username:docker}
	I1227 09:36:56.011927  629532 ssh_runner.go:195] Run: cat /etc/os-release
	I1227 09:36:56.015740  629532 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1227 09:36:56.015765  629532 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1227 09:36:56.015778  629532 filesync.go:126] Scanning /home/jenkins/minikube-integration/22343-373581/.minikube/addons for local assets ...
	I1227 09:36:56.015885  629532 filesync.go:126] Scanning /home/jenkins/minikube-integration/22343-373581/.minikube/files for local assets ...
	I1227 09:36:56.015989  629532 filesync.go:149] local asset: /home/jenkins/minikube-integration/22343-373581/.minikube/files/etc/ssl/certs/3771712.pem -> 3771712.pem in /etc/ssl/certs
	I1227 09:36:56.016101  629532 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1227 09:36:56.024943  629532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/files/etc/ssl/certs/3771712.pem --> /etc/ssl/certs/3771712.pem (1708 bytes)
	I1227 09:36:56.046067  629532 start.go:296] duration metric: took 153.444971ms for postStartSetup
	I1227 09:36:56.046157  629532 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 09:36:56.046226  629532 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-963457
	I1227 09:36:56.065042  629532 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/no-preload-963457/id_rsa Username:docker}
	I1227 09:36:56.156611  629532 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1227 09:36:56.161967  629532 fix.go:56] duration metric: took 6.41436493s for fixHost
	I1227 09:36:56.161992  629532 start.go:83] releasing machines lock for "no-preload-963457", held for 6.414414383s
	I1227 09:36:56.162052  629532 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-963457
	I1227 09:36:56.188154  629532 ssh_runner.go:195] Run: cat /version.json
	I1227 09:36:56.188215  629532 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-963457
	I1227 09:36:56.188464  629532 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1227 09:36:56.188765  629532 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-963457
	I1227 09:36:56.223568  629532 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/no-preload-963457/id_rsa Username:docker}
	I1227 09:36:56.225022  629532 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/no-preload-963457/id_rsa Username:docker}
	I1227 09:36:56.390845  629532 ssh_runner.go:195] Run: systemctl --version
	I1227 09:36:56.399342  629532 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1227 09:36:56.448678  629532 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1227 09:36:56.454437  629532 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1227 09:36:56.454505  629532 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1227 09:36:56.464966  629532 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1227 09:36:56.464988  629532 start.go:496] detecting cgroup driver to use...
	I1227 09:36:56.465019  629532 detect.go:190] detected "systemd" cgroup driver on host os
	I1227 09:36:56.465068  629532 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1227 09:36:56.498904  629532 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 09:36:56.522095  629532 docker.go:218] disabling cri-docker service (if available) ...
	I1227 09:36:56.522154  629532 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1227 09:36:56.554225  629532 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1227 09:36:56.572425  629532 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1227 09:36:56.679708  629532 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1227 09:36:56.789709  629532 docker.go:234] disabling docker service ...
	I1227 09:36:56.789778  629532 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1227 09:36:56.806829  629532 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1227 09:36:56.820513  629532 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1227 09:36:56.923496  629532 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1227 09:36:57.030200  629532 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1227 09:36:57.043639  629532 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 09:36:57.058019  629532 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1227 09:36:57.058082  629532 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:36:57.067538  629532 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1227 09:36:57.067598  629532 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:36:57.077318  629532 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:36:57.085917  629532 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:36:57.094193  629532 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1227 09:36:57.101639  629532 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:36:57.110030  629532 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:36:57.117710  629532 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
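Note: the sed sequence above edits /etc/crio/crio.conf.d/02-crio.conf in place. Reconstructed from those commands (the log never dumps the file itself), the file should end up containing:

	pause_image = "registry.k8s.io/pause:3.10.1"
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]

A quick on-node check would be: sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|default_sysctls' /etc/crio/crio.conf.d/02-crio.conf
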
	I1227 09:36:57.126967  629532 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1227 09:36:57.133883  629532 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1227 09:36:57.141132  629532 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 09:36:57.224153  629532 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1227 09:36:57.360012  629532 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1227 09:36:57.360088  629532 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1227 09:36:57.364319  629532 start.go:574] Will wait 60s for crictl version
	I1227 09:36:57.364375  629532 ssh_runner.go:195] Run: which crictl
	I1227 09:36:57.367811  629532 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1227 09:36:57.391321  629532 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1227 09:36:57.391394  629532 ssh_runner.go:195] Run: crio --version
	I1227 09:36:57.421171  629532 ssh_runner.go:195] Run: crio --version
	I1227 09:36:57.452635  629532 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.34.3 ...
	I1227 09:36:57.453610  629532 cli_runner.go:164] Run: docker network inspect no-preload-963457 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 09:36:57.471362  629532 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1227 09:36:57.475352  629532 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
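Note: the bash one-liner above is minikube's idempotent /etc/hosts update: drop any stale line for the name (matched as tab plus name at end of line), append the fresh mapping, then copy the temp file back with sudo. The same idiom, generalized (update_host and its arguments are illustrative, not from the log):

	update_host() {  # usage: update_host IP NAME
	  local ip="$1" name="$2"
	  { grep -v $'\t'"$name"'$' /etc/hosts; printf '%s\t%s\n' "$ip" "$name"; } > "/tmp/h.$$"
	  sudo cp "/tmp/h.$$" /etc/hosts
	}
	update_host 192.168.85.1 host.minikube.internal
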
	I1227 09:36:57.485498  629532 kubeadm.go:884] updating cluster {Name:no-preload-963457 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:no-preload-963457 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1227 09:36:57.485606  629532 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 09:36:57.485644  629532 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 09:36:57.516604  629532 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 09:36:57.516626  629532 cache_images.go:86] Images are preloaded, skipping loading
	I1227 09:36:57.516634  629532 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0 crio true true} ...
	I1227 09:36:57.516744  629532 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-963457 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:no-preload-963457 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1227 09:36:57.516854  629532 ssh_runner.go:195] Run: crio config
	I1227 09:36:57.561627  629532 cni.go:84] Creating CNI manager for ""
	I1227 09:36:57.561649  629532 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 09:36:57.561667  629532 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1227 09:36:57.561699  629532 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-963457 NodeName:no-preload-963457 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1227 09:36:57.561892  629532 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-963457"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1227 09:36:57.561977  629532 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1227 09:36:57.570489  629532 binaries.go:51] Found k8s binaries, skipping transfer
	I1227 09:36:57.570544  629532 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1227 09:36:57.579475  629532 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1227 09:36:57.592242  629532 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1227 09:36:57.604718  629532 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
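
The kubeadm config rendered above has now been staged as /var/tmp/minikube/kubeadm.yaml.new. As a sketch, the three documents (InitConfiguration/ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) can be checked against the kubeadm API before use; "kubeadm config validate" exists in kubeadm >= 1.26, and the binary path here assumes the staged binaries from the previous step:

    # Validate the rendered v1beta4 config without touching the cluster.
    sudo /var/lib/minikube/binaries/v1.35.0/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new
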
	I1227 09:36:57.617292  629532 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1227 09:36:57.621167  629532 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
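
The /etc/hosts rewrite above is a deliberate pattern: grep -v drops any stale control-plane.minikube.internal entry, the fresh mapping is appended, and the result is copied back with cp rather than mv. Inside a container /etc/hosts is bind-mounted, so the file cannot be replaced atomically; cp rewrites the existing inode in place. Generic sketch of the same idiom (the 192.0.2.1/example.internal pair is a placeholder):

    # Replace-or-append an /etc/hosts entry without swapping the inode.
    { grep -v $'\texample.internal$' /etc/hosts; \
      printf '192.0.2.1\texample.internal\n'; } > "/tmp/h.$$"
    sudo cp "/tmp/h.$$" /etc/hosts && rm -f "/tmp/h.$$"
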
	I1227 09:36:57.631391  629532 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 09:36:57.717314  629532 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 09:36:57.743061  629532 certs.go:69] Setting up /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/no-preload-963457 for IP: 192.168.85.2
	I1227 09:36:57.743088  629532 certs.go:195] generating shared ca certs ...
	I1227 09:36:57.743111  629532 certs.go:227] acquiring lock for ca certs: {Name:mkab52f7e7387ca5a3a8b54d27e85c3698ea95d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:36:57.743279  629532 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22343-373581/.minikube/ca.key
	I1227 09:36:57.743330  629532 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22343-373581/.minikube/proxy-client-ca.key
	I1227 09:36:57.743343  629532 certs.go:257] generating profile certs ...
	I1227 09:36:57.743479  629532 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/no-preload-963457/client.key
	I1227 09:36:57.743563  629532 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/no-preload-963457/apiserver.key.7eac886d
	I1227 09:36:57.743621  629532 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/no-preload-963457/proxy-client.key
	I1227 09:36:57.743760  629532 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/377171.pem (1338 bytes)
	W1227 09:36:57.743831  629532 certs.go:480] ignoring /home/jenkins/minikube-integration/22343-373581/.minikube/certs/377171_empty.pem, impossibly tiny 0 bytes
	I1227 09:36:57.743845  629532 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca-key.pem (1679 bytes)
	I1227 09:36:57.743879  629532 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca.pem (1082 bytes)
	I1227 09:36:57.743916  629532 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/cert.pem (1123 bytes)
	I1227 09:36:57.743950  629532 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/key.pem (1679 bytes)
	I1227 09:36:57.744006  629532 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-373581/.minikube/files/etc/ssl/certs/3771712.pem (1708 bytes)
	I1227 09:36:57.744846  629532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1227 09:36:57.763692  629532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1227 09:36:57.782669  629532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1227 09:36:57.803981  629532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1227 09:36:57.828529  629532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/no-preload-963457/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1227 09:36:57.848835  629532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/no-preload-963457/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1227 09:36:57.866897  629532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/no-preload-963457/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1227 09:36:57.883743  629532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/no-preload-963457/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1227 09:36:57.900146  629532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1227 09:36:57.916751  629532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/certs/377171.pem --> /usr/share/ca-certificates/377171.pem (1338 bytes)
	I1227 09:36:57.934086  629532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/files/etc/ssl/certs/3771712.pem --> /usr/share/ca-certificates/3771712.pem (1708 bytes)
	I1227 09:36:57.952366  629532 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1227 09:36:57.966505  629532 ssh_runner.go:195] Run: openssl version
	I1227 09:36:57.975156  629532 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:36:57.984628  629532 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1227 09:36:57.993907  629532 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:36:57.998878  629532 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 09:06 /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:36:57.998931  629532 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:36:58.039453  629532 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1227 09:36:58.046838  629532 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/377171.pem
	I1227 09:36:58.053745  629532 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/377171.pem /etc/ssl/certs/377171.pem
	I1227 09:36:58.060929  629532 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/377171.pem
	I1227 09:36:58.064401  629532 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 27 09:09 /usr/share/ca-certificates/377171.pem
	I1227 09:36:58.064454  629532 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/377171.pem
	I1227 09:36:58.100242  629532 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1227 09:36:58.107476  629532 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3771712.pem
	I1227 09:36:58.114303  629532 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3771712.pem /etc/ssl/certs/3771712.pem
	I1227 09:36:58.122260  629532 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3771712.pem
	I1227 09:36:58.125672  629532 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 27 09:09 /usr/share/ca-certificates/3771712.pem
	I1227 09:36:58.125718  629532 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3771712.pem
	I1227 09:36:58.160416  629532 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
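
Each certificate block above repeats the same three steps: install the PEM under /usr/share/ca-certificates, compute its OpenSSL subject-name hash, and verify a /etc/ssl/certs/<hash>.0 symlink, which is how OpenSSL locates trust anchors at verification time. Condensed sketch of the convention, using the minikubeCA file from the log:

    # OpenSSL resolves CAs by subject-name hash, so each trusted PEM needs
    # a <hash>.0 symlink under /etc/ssl/certs.
    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"
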
	I1227 09:36:58.167633  629532 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1227 09:36:58.171634  629532 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1227 09:36:58.211068  629532 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1227 09:36:58.251576  629532 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1227 09:36:58.300366  629532 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1227 09:36:58.353707  629532 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1227 09:36:58.409756  629532 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
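
The -checkend 86400 probes above ask whether each control-plane certificate stays valid for at least the next 86400 seconds (24 hours); openssl exits non-zero when a certificate expires inside that window. For example:

    # Exit status tells whether the cert survives the next 24 hours.
    if openssl x509 -noout -checkend 86400 \
         -in /var/lib/minikube/certs/apiserver-kubelet-client.crt; then
      echo "valid for at least 24h"
    else
      echo "expires within 24h"
    fi
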
	I1227 09:36:58.450831  629532 kubeadm.go:401] StartCluster: {Name:no-preload-963457 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:no-preload-963457 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 09:36:58.450953  629532 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 09:36:58.451037  629532 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 09:36:58.486940  629532 cri.go:96] found id: "3e2d56e4ec07d845c73c075df87007dd294313f8f25d93e2e062eae21343461c"
	I1227 09:36:58.487007  629532 cri.go:96] found id: "0edde0ef003566fce4556ceb0a3d7cbc04d2cd6685f5afd803595f3ababb1338"
	I1227 09:36:58.487016  629532 cri.go:96] found id: "03b54f84cfa7e0b506b3122cd323cd0db22c3c4310cfedd9769eeb770ec9a426"
	I1227 09:36:58.487021  629532 cri.go:96] found id: "716a7952d1fa9945a526436df75297cbf883fb889ba62f53b3ae1e94790bfeaa"
	I1227 09:36:58.487067  629532 cri.go:96] found id: ""
	I1227 09:36:58.487122  629532 ssh_runner.go:195] Run: sudo runc list -f json
	W1227 09:36:58.499274  629532 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T09:36:58Z" level=error msg="open /run/runc: no such file or directory"
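
The warning above is benign under CRI-O: runc keeps per-container state beneath /run/runc, and that directory typically only exists once runc has created a container, so its absence simply means there is nothing paused to resume. Sketch of the same check:

    # A missing /run/runc implies no runc-managed (and hence no paused) containers.
    sudo runc list -f json 2>/dev/null || echo "no runc state; nothing paused"
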
	I1227 09:36:58.499327  629532 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1227 09:36:58.507652  629532 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1227 09:36:58.507673  629532 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1227 09:36:58.507717  629532 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1227 09:36:58.515112  629532 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1227 09:36:58.515843  629532 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-963457" does not appear in /home/jenkins/minikube-integration/22343-373581/kubeconfig
	I1227 09:36:58.516271  629532 kubeconfig.go:62] /home/jenkins/minikube-integration/22343-373581/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-963457" cluster setting kubeconfig missing "no-preload-963457" context setting]
	I1227 09:36:58.516950  629532 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-373581/kubeconfig: {Name:mkb411689a0952d3f8325ae3ededdb7352d2980c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:36:58.518808  629532 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1227 09:36:58.526405  629532 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1227 09:36:58.526432  629532 kubeadm.go:602] duration metric: took 18.753157ms to restartPrimaryControlPlane
	I1227 09:36:58.526441  629532 kubeadm.go:403] duration metric: took 75.626448ms to StartCluster
	I1227 09:36:58.526457  629532 settings.go:142] acquiring lock: {Name:mkc867fd794159ff1847b43b60548e454c403aa9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:36:58.526521  629532 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22343-373581/kubeconfig
	I1227 09:36:58.527618  629532 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-373581/kubeconfig: {Name:mkb411689a0952d3f8325ae3ededdb7352d2980c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:36:58.527872  629532 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1227 09:36:58.527997  629532 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1227 09:36:58.528107  629532 addons.go:70] Setting storage-provisioner=true in profile "no-preload-963457"
	I1227 09:36:58.528134  629532 addons.go:239] Setting addon storage-provisioner=true in "no-preload-963457"
	I1227 09:36:58.528133  629532 config.go:182] Loaded profile config "no-preload-963457": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	W1227 09:36:58.528143  629532 addons.go:248] addon storage-provisioner should already be in state true
	I1227 09:36:58.528150  629532 addons.go:70] Setting dashboard=true in profile "no-preload-963457"
	I1227 09:36:58.528157  629532 addons.go:70] Setting default-storageclass=true in profile "no-preload-963457"
	I1227 09:36:58.528178  629532 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-963457"
	I1227 09:36:58.528184  629532 addons.go:239] Setting addon dashboard=true in "no-preload-963457"
	W1227 09:36:58.528193  629532 addons.go:248] addon dashboard should already be in state true
	I1227 09:36:58.528196  629532 host.go:66] Checking if "no-preload-963457" exists ...
	I1227 09:36:58.528219  629532 host.go:66] Checking if "no-preload-963457" exists ...
	I1227 09:36:58.528519  629532 cli_runner.go:164] Run: docker container inspect no-preload-963457 --format={{.State.Status}}
	I1227 09:36:58.528685  629532 cli_runner.go:164] Run: docker container inspect no-preload-963457 --format={{.State.Status}}
	I1227 09:36:58.528697  629532 cli_runner.go:164] Run: docker container inspect no-preload-963457 --format={{.State.Status}}
	I1227 09:36:58.529707  629532 out.go:179] * Verifying Kubernetes components...
	I1227 09:36:58.530836  629532 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 09:36:58.557409  629532 addons.go:239] Setting addon default-storageclass=true in "no-preload-963457"
	W1227 09:36:58.557440  629532 addons.go:248] addon default-storageclass should already be in state true
	I1227 09:36:58.557472  629532 host.go:66] Checking if "no-preload-963457" exists ...
	I1227 09:36:58.558777  629532 cli_runner.go:164] Run: docker container inspect no-preload-963457 --format={{.State.Status}}
	I1227 09:36:58.558787  629532 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1227 09:36:58.559492  629532 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1227 09:36:58.562458  629532 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1227 09:36:56.489729  630355 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22343-373581/.minikube/machines/newest-cni-246956/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1227 09:36:56.526840  630355 cli_runner.go:164] Run: docker container inspect newest-cni-246956 --format={{.State.Status}}
	I1227 09:36:56.552039  630355 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1227 09:36:56.552068  630355 kic_runner.go:114] Args: [docker exec --privileged newest-cni-246956 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1227 09:36:56.617818  630355 cli_runner.go:164] Run: docker container inspect newest-cni-246956 --format={{.State.Status}}
	I1227 09:36:56.638019  630355 machine.go:94] provisionDockerMachine start ...
	I1227 09:36:56.638109  630355 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-246956
	I1227 09:36:56.659481  630355 main.go:144] libmachine: Using SSH client type: native
	I1227 09:36:56.659711  630355 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33463 <nil> <nil>}
	I1227 09:36:56.659723  630355 main.go:144] libmachine: About to run SSH command:
	hostname
	I1227 09:36:56.792984  630355 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-246956
	
	I1227 09:36:56.793019  630355 ubuntu.go:182] provisioning hostname "newest-cni-246956"
	I1227 09:36:56.793088  630355 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-246956
	I1227 09:36:56.815143  630355 main.go:144] libmachine: Using SSH client type: native
	I1227 09:36:56.815483  630355 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33463 <nil> <nil>}
	I1227 09:36:56.815506  630355 main.go:144] libmachine: About to run SSH command:
	sudo hostname newest-cni-246956 && echo "newest-cni-246956" | sudo tee /etc/hostname
	I1227 09:36:56.968737  630355 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-246956
	
	I1227 09:36:56.968893  630355 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-246956
	I1227 09:36:56.992239  630355 main.go:144] libmachine: Using SSH client type: native
	I1227 09:36:56.992470  630355 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33463 <nil> <nil>}
	I1227 09:36:56.992489  630355 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-246956' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-246956/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-246956' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1227 09:36:57.122046  630355 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1227 09:36:57.122079  630355 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22343-373581/.minikube CaCertPath:/home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22343-373581/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22343-373581/.minikube}
	I1227 09:36:57.122127  630355 ubuntu.go:190] setting up certificates
	I1227 09:36:57.122138  630355 provision.go:84] configureAuth start
	I1227 09:36:57.122216  630355 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-246956
	I1227 09:36:57.142307  630355 provision.go:143] copyHostCerts
	I1227 09:36:57.142360  630355 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-373581/.minikube/ca.pem, removing ...
	I1227 09:36:57.142370  630355 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-373581/.minikube/ca.pem
	I1227 09:36:57.142423  630355 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22343-373581/.minikube/ca.pem (1082 bytes)
	I1227 09:36:57.142512  630355 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-373581/.minikube/cert.pem, removing ...
	I1227 09:36:57.142521  630355 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-373581/.minikube/cert.pem
	I1227 09:36:57.142546  630355 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22343-373581/.minikube/cert.pem (1123 bytes)
	I1227 09:36:57.142616  630355 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-373581/.minikube/key.pem, removing ...
	I1227 09:36:57.142623  630355 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-373581/.minikube/key.pem
	I1227 09:36:57.142648  630355 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22343-373581/.minikube/key.pem (1679 bytes)
	I1227 09:36:57.142706  630355 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22343-373581/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca-key.pem org=jenkins.newest-cni-246956 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-246956]
	I1227 09:36:57.212931  630355 provision.go:177] copyRemoteCerts
	I1227 09:36:57.212987  630355 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1227 09:36:57.213033  630355 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-246956
	I1227 09:36:57.230924  630355 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/newest-cni-246956/id_rsa Username:docker}
	I1227 09:36:57.325527  630355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1227 09:36:57.343993  630355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1227 09:36:57.361059  630355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1227 09:36:57.378461  630355 provision.go:87] duration metric: took 256.298706ms to configureAuth
	I1227 09:36:57.378484  630355 ubuntu.go:206] setting minikube options for container-runtime
	I1227 09:36:57.378677  630355 config.go:182] Loaded profile config "newest-cni-246956": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:36:57.378826  630355 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-246956
	I1227 09:36:57.397931  630355 main.go:144] libmachine: Using SSH client type: native
	I1227 09:36:57.398243  630355 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33463 <nil> <nil>}
	I1227 09:36:57.398266  630355 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1227 09:36:57.667097  630355 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1227 09:36:57.667129  630355 machine.go:97] duration metric: took 1.02908483s to provisionDockerMachine
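
Regarding the CRIO_MINIKUBE_OPTIONS write a few lines above: in minikube's kicbase image the crio unit is expected to source /etc/sysconfig/crio.minikube as an EnvironmentFile and append those options to the daemon command line, which is why a plain systemctl restart crio picks up the --insecure-registry flag. This can be confirmed on the node (sketch):

    # Inspect how the crio unit consumes the sysconfig file.
    systemctl cat crio | grep -i -A1 environmentfile
    systemctl show crio -p EnvironmentFiles
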
	I1227 09:36:57.667142  630355 client.go:176] duration metric: took 6.162825502s to LocalClient.Create
	I1227 09:36:57.667182  630355 start.go:167] duration metric: took 6.162896704s to libmachine.API.Create "newest-cni-246956"
	I1227 09:36:57.667192  630355 start.go:293] postStartSetup for "newest-cni-246956" (driver="docker")
	I1227 09:36:57.667204  630355 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1227 09:36:57.667353  630355 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1227 09:36:57.667440  630355 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-246956
	I1227 09:36:57.688032  630355 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/newest-cni-246956/id_rsa Username:docker}
	I1227 09:36:57.781111  630355 ssh_runner.go:195] Run: cat /etc/os-release
	I1227 09:36:57.785094  630355 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1227 09:36:57.785137  630355 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1227 09:36:57.785152  630355 filesync.go:126] Scanning /home/jenkins/minikube-integration/22343-373581/.minikube/addons for local assets ...
	I1227 09:36:57.785207  630355 filesync.go:126] Scanning /home/jenkins/minikube-integration/22343-373581/.minikube/files for local assets ...
	I1227 09:36:57.785305  630355 filesync.go:149] local asset: /home/jenkins/minikube-integration/22343-373581/.minikube/files/etc/ssl/certs/3771712.pem -> 3771712.pem in /etc/ssl/certs
	I1227 09:36:57.785438  630355 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1227 09:36:57.793222  630355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/files/etc/ssl/certs/3771712.pem --> /etc/ssl/certs/3771712.pem (1708 bytes)
	I1227 09:36:57.817083  630355 start.go:296] duration metric: took 149.877387ms for postStartSetup
	I1227 09:36:57.817500  630355 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-246956
	I1227 09:36:57.842720  630355 profile.go:143] Saving config to /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/newest-cni-246956/config.json ...
	I1227 09:36:57.842997  630355 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 09:36:57.843039  630355 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-246956
	I1227 09:36:57.861694  630355 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/newest-cni-246956/id_rsa Username:docker}
	I1227 09:36:57.950266  630355 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1227 09:36:57.955115  630355 start.go:128] duration metric: took 6.45260447s to createHost
	I1227 09:36:57.955139  630355 start.go:83] releasing machines lock for "newest-cni-246956", held for 6.452757416s
	I1227 09:36:57.955207  630355 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-246956
	I1227 09:36:57.976745  630355 ssh_runner.go:195] Run: cat /version.json
	I1227 09:36:57.976812  630355 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1227 09:36:57.976893  630355 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-246956
	I1227 09:36:57.976938  630355 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-246956
	I1227 09:36:57.996141  630355 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/newest-cni-246956/id_rsa Username:docker}
	I1227 09:36:57.997139  630355 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/newest-cni-246956/id_rsa Username:docker}
	I1227 09:36:58.139611  630355 ssh_runner.go:195] Run: systemctl --version
	I1227 09:36:58.145675  630355 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1227 09:36:58.181051  630355 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1227 09:36:58.185484  630355 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1227 09:36:58.185559  630355 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1227 09:36:58.210594  630355 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
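
Renaming the stock bridge and podman CNI configs to *.mk_disabled above leaves the CNI plugin minikube installs next (kindnet, per the "docker" driver + "crio" runtime decision elsewhere in this log) as the only active configuration; CRI implementations load the lexically first file in /etc/cni/net.d. Quick check (sketch):

    # Only the minikube-managed CNI config should remain active.
    ls /etc/cni/net.d/
    # expected: bridge/podman entries carry the .mk_disabled suffix
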
	I1227 09:36:58.210618  630355 start.go:496] detecting cgroup driver to use...
	I1227 09:36:58.210653  630355 detect.go:190] detected "systemd" cgroup driver on host os
	I1227 09:36:58.210713  630355 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1227 09:36:58.227384  630355 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 09:36:58.238872  630355 docker.go:218] disabling cri-docker service (if available) ...
	I1227 09:36:58.238929  630355 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1227 09:36:58.260938  630355 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1227 09:36:58.283057  630355 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1227 09:36:58.414499  630355 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1227 09:36:58.538586  630355 docker.go:234] disabling docker service ...
	I1227 09:36:58.538673  630355 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1227 09:36:58.586101  630355 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1227 09:36:58.605375  630355 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1227 09:36:58.705180  630355 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1227 09:36:58.819410  630355 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1227 09:36:58.832661  630355 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 09:36:58.850388  630355 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1227 09:36:58.850452  630355 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:36:58.862728  630355 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1227 09:36:58.862856  630355 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:36:58.873915  630355 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:36:58.883825  630355 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:36:58.894396  630355 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1227 09:36:58.903928  630355 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:36:58.915832  630355 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:36:58.931065  630355 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
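
The sed series above edits CRI-O's drop-in /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, switch the cgroup manager to systemd (matching the kubelet's cgroupDriver), move conmon into the pod cgroup, and inject a default sysctl that opens unprivileged ports. Expected end state (sketch):

    grep -E 'pause_image|cgroup_manager|conmon_cgroup|default_sysctls|unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf
    # pause_image = "registry.k8s.io/pause:3.10.1"
    # cgroup_manager = "systemd"
    # conmon_cgroup = "pod"
    # default_sysctls = [
    #   "net.ipv4.ip_unprivileged_port_start=0",
    # ]
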
	I1227 09:36:58.941511  630355 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1227 09:36:58.950306  630355 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1227 09:36:58.957971  630355 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 09:36:59.055197  630355 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1227 09:36:59.200763  630355 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1227 09:36:59.200870  630355 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1227 09:36:59.205226  630355 start.go:574] Will wait 60s for crictl version
	I1227 09:36:59.205294  630355 ssh_runner.go:195] Run: which crictl
	I1227 09:36:59.209253  630355 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1227 09:36:59.235124  630355 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1227 09:36:59.235211  630355 ssh_runner.go:195] Run: crio --version
	I1227 09:36:59.266439  630355 ssh_runner.go:195] Run: crio --version
	I1227 09:36:59.304407  630355 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.34.3 ...
	I1227 09:36:59.305344  630355 cli_runner.go:164] Run: docker network inspect newest-cni-246956 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 09:36:59.325345  630355 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1227 09:36:59.329616  630355 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 09:36:59.343894  630355 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1227 09:36:58.562497  629532 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1227 09:36:58.562512  629532 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1227 09:36:58.562565  629532 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-963457
	I1227 09:36:58.563435  629532 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1227 09:36:58.563457  629532 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1227 09:36:58.563515  629532 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-963457
	I1227 09:36:58.586081  629532 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1227 09:36:58.586106  629532 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1227 09:36:58.586165  629532 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-963457
	I1227 09:36:58.597636  629532 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/no-preload-963457/id_rsa Username:docker}
	I1227 09:36:58.600166  629532 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/no-preload-963457/id_rsa Username:docker}
	I1227 09:36:58.615813  629532 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/no-preload-963457/id_rsa Username:docker}
	I1227 09:36:58.683845  629532 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 09:36:58.696212  629532 node_ready.go:35] waiting up to 6m0s for node "no-preload-963457" to be "Ready" ...
	I1227 09:36:58.716769  629532 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1227 09:36:58.717072  629532 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1227 09:36:58.717091  629532 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1227 09:36:58.722474  629532 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1227 09:36:58.732902  629532 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1227 09:36:58.732921  629532 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1227 09:36:58.756653  629532 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1227 09:36:58.756700  629532 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1227 09:36:58.775246  629532 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1227 09:36:58.775273  629532 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1227 09:36:58.791178  629532 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1227 09:36:58.791220  629532 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1227 09:36:58.806784  629532 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1227 09:36:58.806865  629532 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1227 09:36:58.821301  629532 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1227 09:36:58.821323  629532 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1227 09:36:58.835038  629532 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1227 09:36:58.835059  629532 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1227 09:36:58.851360  629532 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1227 09:36:58.851383  629532 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1227 09:36:58.866009  629532 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
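
kubectl accepts repeated -f flags in one apply, so all ten dashboard manifests staged under /etc/kubernetes/addons go to the API server in a single invocation. Pointing -f at the directory is a rough equivalent for everything staged there (sketch, same paths as the log):

    # Apply every staged addon manifest in one call.
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/
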
	W1227 09:36:56.752824  622335 pod_ready.go:104] pod "coredns-7d764666f9-vm5hp" is not "Ready", error: <nil>
	I1227 09:36:58.252524  622335 pod_ready.go:94] pod "coredns-7d764666f9-vm5hp" is "Ready"
	I1227 09:36:58.252556  622335 pod_ready.go:86] duration metric: took 32.507379919s for pod "coredns-7d764666f9-vm5hp" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:36:58.255267  622335 pod_ready.go:83] waiting for pod "etcd-embed-certs-912564" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:36:58.260388  622335 pod_ready.go:94] pod "etcd-embed-certs-912564" is "Ready"
	I1227 09:36:58.260428  622335 pod_ready.go:86] duration metric: took 5.133413ms for pod "etcd-embed-certs-912564" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:36:58.263042  622335 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-912564" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:36:58.269170  622335 pod_ready.go:94] pod "kube-apiserver-embed-certs-912564" is "Ready"
	I1227 09:36:58.269195  622335 pod_ready.go:86] duration metric: took 6.12908ms for pod "kube-apiserver-embed-certs-912564" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:36:58.271334  622335 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-912564" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:36:58.451056  622335 pod_ready.go:94] pod "kube-controller-manager-embed-certs-912564" is "Ready"
	I1227 09:36:58.451082  622335 pod_ready.go:86] duration metric: took 179.728256ms for pod "kube-controller-manager-embed-certs-912564" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:36:58.650121  622335 pod_ready.go:83] waiting for pod "kube-proxy-dv8ch" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:36:59.050144  622335 pod_ready.go:94] pod "kube-proxy-dv8ch" is "Ready"
	I1227 09:36:59.050170  622335 pod_ready.go:86] duration metric: took 400.019705ms for pod "kube-proxy-dv8ch" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:36:59.249302  622335 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-912564" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:36:59.649616  622335 pod_ready.go:94] pod "kube-scheduler-embed-certs-912564" is "Ready"
	I1227 09:36:59.649652  622335 pod_ready.go:86] duration metric: took 400.318884ms for pod "kube-scheduler-embed-certs-912564" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:36:59.649667  622335 pod_ready.go:40] duration metric: took 33.907675392s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1227 09:36:59.704219  622335 start.go:625] kubectl: 1.35.0, cluster: 1.35.0 (minor skew: 0)
	I1227 09:36:59.705864  622335 out.go:179] * Done! kubectl is now configured to use "embed-certs-912564" cluster and "default" namespace by default
	I1227 09:36:59.788339  629532 node_ready.go:49] node "no-preload-963457" is "Ready"
	I1227 09:36:59.788374  629532 node_ready.go:38] duration metric: took 1.092117451s for node "no-preload-963457" to be "Ready" ...
	I1227 09:36:59.788394  629532 api_server.go:52] waiting for apiserver process to appear ...
	I1227 09:36:59.788452  629532 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 09:37:00.485897  629532 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.769088985s)
	I1227 09:37:00.485927  629532 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.763421495s)
	I1227 09:37:00.486068  629532 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.620019639s)
	I1227 09:37:00.486186  629532 api_server.go:72] duration metric: took 1.958263237s to wait for apiserver process to appear ...
	I1227 09:37:00.486206  629532 api_server.go:88] waiting for apiserver healthz status ...
	I1227 09:37:00.486232  629532 api_server.go:299] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1227 09:37:00.488270  629532 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-963457 addons enable metrics-server
	
	I1227 09:37:00.491676  629532 api_server.go:325] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1227 09:37:00.491700  629532 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
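
A 500 from /healthz is expected at this stage of a restart: the two [-] entries fail until the bootstrap RBAC roles and system priority classes have been written, and the reasons are withheld from unauthorized callers. The wait that follows amounts to polling until every post-start hook reports ok (sketch; on default installs /healthz is readable anonymously via the system:public-info-viewer binding):

    # Poll until the API server reports healthy, then show the verbose tail.
    until curl -ksf https://192.168.85.2:8443/healthz >/dev/null; do sleep 1; done
    curl -ks 'https://192.168.85.2:8443/healthz?verbose' | tail -n 3
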
	I1227 09:37:00.493717  629532 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1227 09:36:59.344975  630355 kubeadm.go:884] updating cluster {Name:newest-cni-246956 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-246956 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1227 09:36:59.345101  630355 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 09:36:59.345149  630355 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 09:36:59.384759  630355 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 09:36:59.384782  630355 crio.go:433] Images already preloaded, skipping extraction
	I1227 09:36:59.384849  630355 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 09:36:59.410055  630355 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 09:36:59.410078  630355 cache_images.go:86] Images are preloaded, skipping loading
	I1227 09:36:59.410088  630355 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0 crio true true} ...
	I1227 09:36:59.410204  630355 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-246956 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:newest-cni-246956 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
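The [Unit]/[Service]/[Install] fragment above becomes the systemd drop-in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (scp'd a few lines below). To confirm systemd merged it, something like the following works (a sketch, not taken from the run):

    # Print the kubelet unit plus all drop-ins, including 10-kubeadm.conf.
    systemctl cat kubelet
    # The empty "ExecStart=" clears the base unit's ExecStart before the
    # override, so only one merged command line should remain:
    systemctl show kubelet -p ExecStart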
	I1227 09:36:59.410294  630355 ssh_runner.go:195] Run: crio config
	I1227 09:36:59.456322  630355 cni.go:84] Creating CNI manager for ""
	I1227 09:36:59.456350  630355 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 09:36:59.456368  630355 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1227 09:36:59.456397  630355 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-246956 NodeName:newest-cni-246956 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1227 09:36:59.456523  630355 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-246956"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
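The manifest above is written to /var/tmp/minikube/kubeadm.yaml.new (see the scp below) and later renamed to kubeadm.yaml before init. To sanity-check such a file by hand, a sketch (kubeadm config validate requires a reasonably recent kubeadm):

    # Validate the rendered config without touching the cluster:
    sudo /var/lib/minikube/binaries/v1.35.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
    # Or exercise the full init code path without persisting anything:
    sudo /var/lib/minikube/binaries/v1.35.0/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run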
	I1227 09:36:59.456584  630355 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1227 09:36:59.466669  630355 binaries.go:51] Found k8s binaries, skipping transfer
	I1227 09:36:59.466742  630355 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1227 09:36:59.475652  630355 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1227 09:36:59.488517  630355 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1227 09:36:59.502920  630355 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2211 bytes)
	I1227 09:36:59.515008  630355 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1227 09:36:59.518524  630355 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
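The one-liner above is an idempotent /etc/hosts update: strip any line ending in a tab plus the name, append the fresh entry, and copy the result back under sudo. Expanded for readability (variable names hypothetical):

    ip=192.168.76.2
    name=control-plane.minikube.internal
    tab=$(printf '\t')
    # Keep everything except an existing "<addr><TAB><name>" line, then append ours.
    { grep -v "${tab}${name}\$" /etc/hosts; printf '%s\t%s\n' "$ip" "$name"; } > "/tmp/hosts.$$"
    sudo cp "/tmp/hosts.$$" /etc/hosts && rm -f "/tmp/hosts.$$"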
	I1227 09:36:59.528038  630355 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 09:36:59.624983  630355 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 09:36:59.660589  630355 certs.go:69] Setting up /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/newest-cni-246956 for IP: 192.168.76.2
	I1227 09:36:59.660613  630355 certs.go:195] generating shared ca certs ...
	I1227 09:36:59.660633  630355 certs.go:227] acquiring lock for ca certs: {Name:mkab52f7e7387ca5a3a8b54d27e85c3698ea95d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:36:59.660905  630355 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22343-373581/.minikube/ca.key
	I1227 09:36:59.661015  630355 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22343-373581/.minikube/proxy-client-ca.key
	I1227 09:36:59.661035  630355 certs.go:257] generating profile certs ...
	I1227 09:36:59.661115  630355 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/newest-cni-246956/client.key
	I1227 09:36:59.661143  630355 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/newest-cni-246956/client.crt with IP's: []
	I1227 09:36:59.788963  630355 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/newest-cni-246956/client.crt ...
	I1227 09:36:59.789056  630355 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/newest-cni-246956/client.crt: {Name:mke160e795be5819fc64a4cfdc99d30cbaf7ac78 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:36:59.789341  630355 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/newest-cni-246956/client.key ...
	I1227 09:36:59.789401  630355 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/newest-cni-246956/client.key: {Name:mked296971a2b1adfd827807ea9bcfac542a6198 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:36:59.789603  630355 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/newest-cni-246956/apiserver.key.c99eabfc
	I1227 09:36:59.789628  630355 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/newest-cni-246956/apiserver.crt.c99eabfc with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1227 09:37:00.007987  630355 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/newest-cni-246956/apiserver.crt.c99eabfc ...
	I1227 09:37:00.008015  630355 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/newest-cni-246956/apiserver.crt.c99eabfc: {Name:mkf106bea43ddce33073679b38a2435ae123204d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:37:00.008208  630355 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/newest-cni-246956/apiserver.key.c99eabfc ...
	I1227 09:37:00.008231  630355 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/newest-cni-246956/apiserver.key.c99eabfc: {Name:mk7a7a619839a917a7bc295106055593f103712f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:37:00.008348  630355 certs.go:382] copying /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/newest-cni-246956/apiserver.crt.c99eabfc -> /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/newest-cni-246956/apiserver.crt
	I1227 09:37:00.008443  630355 certs.go:386] copying /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/newest-cni-246956/apiserver.key.c99eabfc -> /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/newest-cni-246956/apiserver.key
	I1227 09:37:00.008507  630355 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/newest-cni-246956/proxy-client.key
	I1227 09:37:00.008521  630355 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/newest-cni-246956/proxy-client.crt with IP's: []
	I1227 09:37:00.065399  630355 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/newest-cni-246956/proxy-client.crt ...
	I1227 09:37:00.065432  630355 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/newest-cni-246956/proxy-client.crt: {Name:mk53170e13da64d8c60c92c2979a2d1722947a2e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:37:00.065641  630355 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/newest-cni-246956/proxy-client.key ...
	I1227 09:37:00.065669  630355 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/newest-cni-246956/proxy-client.key: {Name:mk035fe76dc91ae603b8c29c1b707b2402dd30b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
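With the client, apiserver, and aggregator (proxy-client) pairs written, the SANs baked into the apiserver cert can be inspected with openssl, sketched here (-ext needs OpenSSL 1.1.1+; adjust the profile path to your MINIKUBE_HOME):

    # Should list the DNS SANs plus the IPs requested above:
    # 10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2
    openssl x509 -noout -subject -dates -ext subjectAltName \
      -in "$HOME/.minikube/profiles/newest-cni-246956/apiserver.crt"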
	I1227 09:37:00.065955  630355 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/377171.pem (1338 bytes)
	W1227 09:37:00.066015  630355 certs.go:480] ignoring /home/jenkins/minikube-integration/22343-373581/.minikube/certs/377171_empty.pem, impossibly tiny 0 bytes
	I1227 09:37:00.066033  630355 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca-key.pem (1679 bytes)
	I1227 09:37:00.066073  630355 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca.pem (1082 bytes)
	I1227 09:37:00.066115  630355 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/cert.pem (1123 bytes)
	I1227 09:37:00.066153  630355 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/key.pem (1679 bytes)
	I1227 09:37:00.066212  630355 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-373581/.minikube/files/etc/ssl/certs/3771712.pem (1708 bytes)
	I1227 09:37:00.066935  630355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1227 09:37:00.090536  630355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1227 09:37:00.114426  630355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1227 09:37:00.139622  630355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1227 09:37:00.168025  630355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/newest-cni-246956/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1227 09:37:00.199153  630355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/newest-cni-246956/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1227 09:37:00.230695  630355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/newest-cni-246956/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1227 09:37:00.258683  630355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/newest-cni-246956/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1227 09:37:00.286125  630355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/files/etc/ssl/certs/3771712.pem --> /usr/share/ca-certificates/3771712.pem (1708 bytes)
	I1227 09:37:00.307110  630355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1227 09:37:00.329272  630355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/certs/377171.pem --> /usr/share/ca-certificates/377171.pem (1338 bytes)
	I1227 09:37:00.353282  630355 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1227 09:37:00.370493  630355 ssh_runner.go:195] Run: openssl version
	I1227 09:37:00.378974  630355 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3771712.pem
	I1227 09:37:00.386715  630355 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3771712.pem /etc/ssl/certs/3771712.pem
	I1227 09:37:00.395766  630355 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3771712.pem
	I1227 09:37:00.400298  630355 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 27 09:09 /usr/share/ca-certificates/3771712.pem
	I1227 09:37:00.400359  630355 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3771712.pem
	I1227 09:37:00.438423  630355 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1227 09:37:00.446991  630355 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/3771712.pem /etc/ssl/certs/3ec20f2e.0
	I1227 09:37:00.455583  630355 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:37:00.464747  630355 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1227 09:37:00.475422  630355 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:37:00.480200  630355 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 09:06 /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:37:00.480263  630355 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:37:00.528725  630355 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1227 09:37:00.536448  630355 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1227 09:37:00.543666  630355 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/377171.pem
	I1227 09:37:00.550688  630355 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/377171.pem /etc/ssl/certs/377171.pem
	I1227 09:37:00.557833  630355 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/377171.pem
	I1227 09:37:00.561504  630355 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 27 09:09 /usr/share/ca-certificates/377171.pem
	I1227 09:37:00.561559  630355 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/377171.pem
	I1227 09:37:00.597670  630355 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1227 09:37:00.605630  630355 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/377171.pem /etc/ssl/certs/51391683.0
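The openssl x509 -hash / ln -fs pairs above implement OpenSSL's hashed-directory lookup: verifiers resolve a CA by looking for <subject-hash>.0 inside /etc/ssl/certs, which is why minikubeCA.pem gains the b5213941.0 alias. Done manually it is roughly:

    pem=/usr/share/ca-certificates/minikubeCA.pem
    h=$(openssl x509 -hash -noout -in "$pem")   # prints b5213941 for this CA, per the log
    sudo ln -fs "$pem" "/etc/ssl/certs/${h}.0"
    # Bulk equivalent on OpenSSL 1.1+: sudo openssl rehash /etc/ssl/certs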
	I1227 09:37:00.613503  630355 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1227 09:37:00.617430  630355 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1227 09:37:00.617489  630355 kubeadm.go:401] StartCluster: {Name:newest-cni-246956 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-246956 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 09:37:00.617574  630355 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 09:37:00.617633  630355 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 09:37:00.648921  630355 cri.go:96] found id: ""
	I1227 09:37:00.648987  630355 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1227 09:37:00.657821  630355 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1227 09:37:00.665951  630355 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1227 09:37:00.666012  630355 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1227 09:37:00.674118  630355 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1227 09:37:00.674134  630355 kubeadm.go:158] found existing configuration files:
	
	I1227 09:37:00.674176  630355 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1227 09:37:00.681708  630355 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1227 09:37:00.681768  630355 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1227 09:37:00.689291  630355 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1227 09:37:00.697145  630355 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1227 09:37:00.697203  630355 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1227 09:37:00.705396  630355 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1227 09:37:00.714137  630355 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1227 09:37:00.714200  630355 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1227 09:37:00.723361  630355 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1227 09:37:00.732948  630355 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1227 09:37:00.733004  630355 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1227 09:37:00.741856  630355 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1227 09:37:00.788203  630355 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I1227 09:37:00.788305  630355 kubeadm.go:319] [preflight] Running pre-flight checks
	I1227 09:37:00.868108  630355 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1227 09:37:00.868200  630355 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1227 09:37:00.868288  630355 kubeadm.go:319] OS: Linux
	I1227 09:37:00.868365  630355 kubeadm.go:319] CGROUPS_CPU: enabled
	I1227 09:37:00.868440  630355 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1227 09:37:00.868658  630355 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1227 09:37:00.868734  630355 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1227 09:37:00.868816  630355 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1227 09:37:00.868893  630355 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1227 09:37:00.868964  630355 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1227 09:37:00.869016  630355 kubeadm.go:319] CGROUPS_IO: enabled
	I1227 09:37:00.945222  630355 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1227 09:37:00.945395  630355 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1227 09:37:00.945534  630355 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1227 09:37:00.953623  630355 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
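The long --ignore-preflight-errors list above reflects the docker driver: kubeadm runs inside a container, so checks such as Swap, Mem, and SystemVerification are expected to fail (the verification output is still printed, as seen). Just that phase can be rerun in isolation, sketched here with an abbreviated ignore list:

    sudo env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" \
      kubeadm init phase preflight --config /var/tmp/minikube/kubeadm.yaml \
      --ignore-preflight-errors=Swap,Mem,SystemVerification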
	I1227 09:36:56.366926  631392 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-497722" ...
	I1227 09:36:56.367007  631392 cli_runner.go:164] Run: docker start default-k8s-diff-port-497722
	I1227 09:36:56.696501  631392 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-497722 --format={{.State.Status}}
	I1227 09:36:56.724853  631392 kic.go:430] container "default-k8s-diff-port-497722" state is running.
	I1227 09:36:56.725355  631392 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-497722
	I1227 09:36:56.748319  631392 profile.go:143] Saving config to /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/default-k8s-diff-port-497722/config.json ...
	I1227 09:36:56.748736  631392 machine.go:94] provisionDockerMachine start ...
	I1227 09:36:56.748860  631392 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-497722
	I1227 09:36:56.769479  631392 main.go:144] libmachine: Using SSH client type: native
	I1227 09:36:56.769885  631392 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33468 <nil> <nil>}
	I1227 09:36:56.769906  631392 main.go:144] libmachine: About to run SSH command:
	hostname
	I1227 09:36:56.770811  631392 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:56300->127.0.0.1:33468: read: connection reset by peer
	I1227 09:36:59.952704  631392 main.go:144] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-497722
	
	I1227 09:36:59.952731  631392 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-497722"
	I1227 09:36:59.952803  631392 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-497722
	I1227 09:36:59.977726  631392 main.go:144] libmachine: Using SSH client type: native
	I1227 09:36:59.978041  631392 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33468 <nil> <nil>}
	I1227 09:36:59.978072  631392 main.go:144] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-497722 && echo "default-k8s-diff-port-497722" | sudo tee /etc/hostname
	I1227 09:37:00.132462  631392 main.go:144] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-497722
	
	I1227 09:37:00.132551  631392 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-497722
	I1227 09:37:00.162410  631392 main.go:144] libmachine: Using SSH client type: native
	I1227 09:37:00.162741  631392 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33468 <nil> <nil>}
	I1227 09:37:00.162763  631392 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-497722' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-497722/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-497722' | sudo tee -a /etc/hosts; 
				fi
			fi
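The script above follows the Debian convention of mapping the machine's own hostname to 127.0.1.1: if some line already ends in the hostname nothing changes; otherwise an existing 127.0.1.1 line is rewritten, or a fresh one appended. A standalone equivalent (hostname value taken from this run, otherwise hypothetical):

    name=default-k8s-diff-port-497722
    if ! grep -q "[[:space:]]$name\$" /etc/hosts; then
      if grep -q '^127\.0\.1\.1[[:space:]]' /etc/hosts; then
        sudo sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 $name/" /etc/hosts
      else
        echo "127.0.1.1 $name" | sudo tee -a /etc/hosts
      fi
    fi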
	I1227 09:37:00.320889  631392 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1227 09:37:00.321032  631392 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22343-373581/.minikube CaCertPath:/home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22343-373581/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22343-373581/.minikube}
	I1227 09:37:00.321059  631392 ubuntu.go:190] setting up certificates
	I1227 09:37:00.321086  631392 provision.go:84] configureAuth start
	I1227 09:37:00.321152  631392 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-497722
	I1227 09:37:00.345021  631392 provision.go:143] copyHostCerts
	I1227 09:37:00.345085  631392 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-373581/.minikube/cert.pem, removing ...
	I1227 09:37:00.345108  631392 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-373581/.minikube/cert.pem
	I1227 09:37:00.345193  631392 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22343-373581/.minikube/cert.pem (1123 bytes)
	I1227 09:37:00.345342  631392 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-373581/.minikube/key.pem, removing ...
	I1227 09:37:00.345358  631392 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-373581/.minikube/key.pem
	I1227 09:37:00.345408  631392 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22343-373581/.minikube/key.pem (1679 bytes)
	I1227 09:37:00.345527  631392 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-373581/.minikube/ca.pem, removing ...
	I1227 09:37:00.345542  631392 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-373581/.minikube/ca.pem
	I1227 09:37:00.345633  631392 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22343-373581/.minikube/ca.pem (1082 bytes)
	I1227 09:37:00.345740  631392 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22343-373581/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-497722 san=[127.0.0.1 192.168.103.2 default-k8s-diff-port-497722 localhost minikube]
	I1227 09:37:00.442570  631392 provision.go:177] copyRemoteCerts
	I1227 09:37:00.442624  631392 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1227 09:37:00.442658  631392 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-497722
	I1227 09:37:00.466058  631392 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33468 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/default-k8s-diff-port-497722/id_rsa Username:docker}
	I1227 09:37:00.564061  631392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1227 09:37:00.581525  631392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1227 09:37:00.598744  631392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1227 09:37:00.617306  631392 provision.go:87] duration metric: took 296.195744ms to configureAuth
	I1227 09:37:00.617333  631392 ubuntu.go:206] setting minikube options for container-runtime
	I1227 09:37:00.617559  631392 config.go:182] Loaded profile config "default-k8s-diff-port-497722": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:37:00.617677  631392 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-497722
	I1227 09:37:00.639063  631392 main.go:144] libmachine: Using SSH client type: native
	I1227 09:37:00.639350  631392 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33468 <nil> <nil>}
	I1227 09:37:00.639374  631392 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1227 09:37:00.992976  631392 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1227 09:37:00.993002  631392 machine.go:97] duration metric: took 4.244244423s to provisionDockerMachine
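The printf | tee above drops an environment file that the crio unit in the kicbase image is expected to source, injecting --insecure-registry for the service CIDR; the restart applies it. One way to confirm (the unit wiring is an assumption about the kicbase image, not shown in the log):

    cat /etc/sysconfig/crio.minikube
    # The unit should pull the file in via EnvironmentFile= and expand
    # $CRIO_MINIKUBE_OPTIONS on its ExecStart line:
    systemctl cat crio | grep -nE 'EnvironmentFile|CRIO_MINIKUBE_OPTIONS'
    systemctl is-active crio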
	I1227 09:37:00.993015  631392 start.go:293] postStartSetup for "default-k8s-diff-port-497722" (driver="docker")
	I1227 09:37:00.993027  631392 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1227 09:37:00.993100  631392 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1227 09:37:00.993147  631392 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-497722
	I1227 09:37:01.014724  631392 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33468 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/default-k8s-diff-port-497722/id_rsa Username:docker}
	I1227 09:37:01.108368  631392 ssh_runner.go:195] Run: cat /etc/os-release
	I1227 09:37:01.111898  631392 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1227 09:37:01.111923  631392 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1227 09:37:01.111934  631392 filesync.go:126] Scanning /home/jenkins/minikube-integration/22343-373581/.minikube/addons for local assets ...
	I1227 09:37:01.111974  631392 filesync.go:126] Scanning /home/jenkins/minikube-integration/22343-373581/.minikube/files for local assets ...
	I1227 09:37:01.112059  631392 filesync.go:149] local asset: /home/jenkins/minikube-integration/22343-373581/.minikube/files/etc/ssl/certs/3771712.pem -> 3771712.pem in /etc/ssl/certs
	I1227 09:37:01.112148  631392 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1227 09:37:00.957070  630355 out.go:252]   - Generating certificates and keys ...
	I1227 09:37:00.957171  630355 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1227 09:37:00.957258  630355 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1227 09:37:01.046705  630355 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1227 09:37:01.170369  630355 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1227 09:37:01.237679  630355 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1227 09:37:01.119477  631392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/files/etc/ssl/certs/3771712.pem --> /etc/ssl/certs/3771712.pem (1708 bytes)
	I1227 09:37:01.136366  631392 start.go:296] duration metric: took 143.321557ms for postStartSetup
	I1227 09:37:01.136451  631392 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 09:37:01.136488  631392 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-497722
	I1227 09:37:01.156365  631392 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33468 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/default-k8s-diff-port-497722/id_rsa Username:docker}
	I1227 09:37:01.245668  631392 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1227 09:37:01.250596  631392 fix.go:56] duration metric: took 4.904478049s for fixHost
	I1227 09:37:01.250623  631392 start.go:83] releasing machines lock for "default-k8s-diff-port-497722", held for 4.904529799s
	I1227 09:37:01.250702  631392 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-497722
	I1227 09:37:01.270598  631392 ssh_runner.go:195] Run: cat /version.json
	I1227 09:37:01.270653  631392 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-497722
	I1227 09:37:01.270708  631392 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1227 09:37:01.270815  631392 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-497722
	I1227 09:37:01.289307  631392 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33468 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/default-k8s-diff-port-497722/id_rsa Username:docker}
	I1227 09:37:01.291072  631392 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33468 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/default-k8s-diff-port-497722/id_rsa Username:docker}
	I1227 09:37:01.376675  631392 ssh_runner.go:195] Run: systemctl --version
	I1227 09:37:01.432189  631392 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1227 09:37:01.470826  631392 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1227 09:37:01.475338  631392 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1227 09:37:01.475421  631392 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1227 09:37:01.483037  631392 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1227 09:37:01.483059  631392 start.go:496] detecting cgroup driver to use...
	I1227 09:37:01.483091  631392 detect.go:190] detected "systemd" cgroup driver on host os
	I1227 09:37:01.483133  631392 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1227 09:37:01.498478  631392 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 09:37:01.511191  631392 docker.go:218] disabling cri-docker service (if available) ...
	I1227 09:37:01.511242  631392 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1227 09:37:01.526955  631392 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1227 09:37:01.540540  631392 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1227 09:37:01.622776  631392 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1227 09:37:01.703107  631392 docker.go:234] disabling docker service ...
	I1227 09:37:01.703196  631392 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1227 09:37:01.717309  631392 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1227 09:37:01.729344  631392 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1227 09:37:01.840554  631392 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1227 09:37:01.935549  631392 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1227 09:37:01.947578  631392 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 09:37:01.961324  631392 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1227 09:37:01.961377  631392 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:37:01.970001  631392 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1227 09:37:01.970098  631392 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:37:01.979093  631392 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:37:01.989166  631392 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:37:01.998100  631392 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1227 09:37:02.007297  631392 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:37:02.017288  631392 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:37:02.025859  631392 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:37:02.035574  631392 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1227 09:37:02.043526  631392 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1227 09:37:02.050579  631392 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 09:37:02.135042  631392 ssh_runner.go:195] Run: sudo systemctl restart crio
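The sed chain above edits /etc/crio/crio.conf.d/02-crio.conf in place (pause image, systemd cgroup manager, conmon_cgroup, unprivileged-port sysctl) before the daemon-reload and crio restart. A quick post-hoc check, sketched:

    sudo grep -nE 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf
    # Expected, roughly:
    #   pause_image = "registry.k8s.io/pause:3.10.1"
    #   cgroup_manager = "systemd"
    #   conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",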
	I1227 09:37:02.277022  631392 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1227 09:37:02.277093  631392 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1227 09:37:02.280991  631392 start.go:574] Will wait 60s for crictl version
	I1227 09:37:02.281049  631392 ssh_runner.go:195] Run: which crictl
	I1227 09:37:02.284712  631392 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1227 09:37:02.310427  631392 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1227 09:37:02.310505  631392 ssh_runner.go:195] Run: crio --version
	I1227 09:37:02.339457  631392 ssh_runner.go:195] Run: crio --version
	I1227 09:37:02.369305  631392 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.34.3 ...
	I1227 09:37:01.372389  630355 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1227 09:37:01.413174  630355 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1227 09:37:01.413305  630355 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-246956] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1227 09:37:01.527664  630355 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1227 09:37:01.527805  630355 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-246956] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1227 09:37:01.615200  630355 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1227 09:37:01.840816  630355 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1227 09:37:02.017322  630355 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1227 09:37:02.017649  630355 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1227 09:37:02.264512  630355 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1227 09:37:02.566832  630355 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1227 09:37:02.683014  630355 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1227 09:37:02.757540  630355 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1227 09:37:02.820765  630355 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1227 09:37:02.821428  630355 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1227 09:37:02.825397  630355 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1227 09:37:02.370401  631392 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-497722 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 09:37:02.389071  631392 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1227 09:37:02.393241  631392 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 09:37:02.403448  631392 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-497722 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-497722 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1227 09:37:02.403574  631392 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 09:37:02.403630  631392 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 09:37:02.438551  631392 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 09:37:02.438590  631392 crio.go:433] Images already preloaded, skipping extraction
	I1227 09:37:02.438667  631392 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 09:37:02.465209  631392 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 09:37:02.465235  631392 cache_images.go:86] Images are preloaded, skipping loading
	I1227 09:37:02.465245  631392 kubeadm.go:935] updating node { 192.168.103.2 8444 v1.35.0 crio true true} ...
	I1227 09:37:02.465366  631392 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-497722 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-497722 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1227 09:37:02.465461  631392 ssh_runner.go:195] Run: crio config
	I1227 09:37:02.513257  631392 cni.go:84] Creating CNI manager for ""
	I1227 09:37:02.513278  631392 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 09:37:02.513294  631392 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1227 09:37:02.513317  631392 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8444 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-497722 NodeName:default-k8s-diff-port-497722 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1227 09:37:02.513444  631392 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-497722"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1227 09:37:02.513505  631392 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1227 09:37:02.521923  631392 binaries.go:51] Found k8s binaries, skipping transfer
	I1227 09:37:02.521985  631392 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1227 09:37:02.529554  631392 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I1227 09:37:02.543277  631392 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1227 09:37:02.555132  631392 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2227 bytes)
	I1227 09:37:02.567822  631392 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1227 09:37:02.571320  631392 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 09:37:02.580644  631392 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 09:37:02.664822  631392 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 09:37:02.693404  631392 certs.go:69] Setting up /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/default-k8s-diff-port-497722 for IP: 192.168.103.2
	I1227 09:37:02.693441  631392 certs.go:195] generating shared ca certs ...
	I1227 09:37:02.693462  631392 certs.go:227] acquiring lock for ca certs: {Name:mkab52f7e7387ca5a3a8b54d27e85c3698ea95d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:37:02.693637  631392 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22343-373581/.minikube/ca.key
	I1227 09:37:02.693699  631392 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22343-373581/.minikube/proxy-client-ca.key
	I1227 09:37:02.693714  631392 certs.go:257] generating profile certs ...
	I1227 09:37:02.693848  631392 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/default-k8s-diff-port-497722/client.key
	I1227 09:37:02.693949  631392 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/default-k8s-diff-port-497722/apiserver.key.70f960dd
	I1227 09:37:02.694002  631392 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/default-k8s-diff-port-497722/proxy-client.key
	I1227 09:37:02.694163  631392 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/377171.pem (1338 bytes)
	W1227 09:37:02.694205  631392 certs.go:480] ignoring /home/jenkins/minikube-integration/22343-373581/.minikube/certs/377171_empty.pem, impossibly tiny 0 bytes
	I1227 09:37:02.694217  631392 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca-key.pem (1679 bytes)
	I1227 09:37:02.694258  631392 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca.pem (1082 bytes)
	I1227 09:37:02.694290  631392 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/cert.pem (1123 bytes)
	I1227 09:37:02.694323  631392 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/key.pem (1679 bytes)
	I1227 09:37:02.694385  631392 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-373581/.minikube/files/etc/ssl/certs/3771712.pem (1708 bytes)
	I1227 09:37:02.695781  631392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1227 09:37:02.717703  631392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1227 09:37:02.740004  631392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1227 09:37:02.760168  631392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1227 09:37:02.785274  631392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/default-k8s-diff-port-497722/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1227 09:37:02.808608  631392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/default-k8s-diff-port-497722/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1227 09:37:02.826033  631392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/default-k8s-diff-port-497722/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1227 09:37:02.851489  631392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/default-k8s-diff-port-497722/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1227 09:37:02.871940  631392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1227 09:37:02.895485  631392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/certs/377171.pem --> /usr/share/ca-certificates/377171.pem (1338 bytes)
	I1227 09:37:02.913955  631392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/files/etc/ssl/certs/3771712.pem --> /usr/share/ca-certificates/3771712.pem (1708 bytes)
	I1227 09:37:02.930694  631392 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1227 09:37:02.945635  631392 ssh_runner.go:195] Run: openssl version
	I1227 09:37:02.953715  631392 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:37:02.961830  631392 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1227 09:37:02.969519  631392 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:37:02.973176  631392 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 09:06 /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:37:02.973234  631392 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:37:03.016770  631392 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1227 09:37:03.024510  631392 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/377171.pem
	I1227 09:37:03.031699  631392 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/377171.pem /etc/ssl/certs/377171.pem
	I1227 09:37:03.039836  631392 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/377171.pem
	I1227 09:37:03.043468  631392 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 27 09:09 /usr/share/ca-certificates/377171.pem
	I1227 09:37:03.043517  631392 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/377171.pem
	I1227 09:37:03.079383  631392 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1227 09:37:03.086880  631392 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3771712.pem
	I1227 09:37:03.094441  631392 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3771712.pem /etc/ssl/certs/3771712.pem
	I1227 09:37:03.102480  631392 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3771712.pem
	I1227 09:37:03.105974  631392 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 27 09:09 /usr/share/ca-certificates/3771712.pem
	I1227 09:37:03.106034  631392 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3771712.pem
	I1227 09:37:03.140053  631392 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
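The openssl x509 -hash / test -L /etc/ssl/certs/<hash>.0 pairs above follow OpenSSL's CA lookup convention: clients find a trusted certificate through a symlink named after its subject-name hash (b5213941, 51391683, 3ec20f2e here). A sketch of creating such a link by shelling out to openssl, as the logs do; linkCACert is a hypothetical helper, not minikube's code:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// linkCACert asks openssl for the cert's subject-name hash and points
// /etc/ssl/certs/<hash>.0 at the PEM, which is what the
// `test -L /etc/ssl/certs/b5213941.0` lines above are checking for.
func linkCACert(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
	os.Remove(link) // ln -fs semantics: replace a stale link if present
	return os.Symlink(pemPath, link)
}

func main() {
	if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}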
	I1227 09:37:03.147121  631392 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1227 09:37:03.150759  631392 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1227 09:37:03.185667  631392 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1227 09:37:03.239684  631392 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1227 09:37:03.282935  631392 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1227 09:37:03.332349  631392 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1227 09:37:03.387948  631392 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
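The -checkend 86400 flag asks openssl whether the certificate expires within the next 86400 seconds (24 hours); a non-zero exit would trigger regeneration. The equivalent check in Go with crypto/x509 (a sketch; the path is one of those tested above):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// notExpiringWithin mirrors `openssl x509 -noout -checkend 86400`: it
// reports whether the certificate is still valid d from now.
func notExpiringWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).Before(cert.NotAfter), nil
}

func main() {
	ok, err := notExpiringWithin("/var/lib/minikube/certs/front-proxy-client.crt", 24*time.Hour)
	fmt.Println(ok, err)
}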
	I1227 09:37:03.433087  631392 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-497722 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-497722 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 09:37:03.433200  631392 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 09:37:03.433280  631392 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 09:37:03.466372  631392 cri.go:96] found id: "f7a05d26251e8a9f4091116e036a79a8654a182636649e96285fc252c0530199"
	I1227 09:37:03.466401  631392 cri.go:96] found id: "05ea82c1bf330e30a293e6d7ea1c01b1766ebce50eeaf8868bbcf622fd71d8e8"
	I1227 09:37:03.466408  631392 cri.go:96] found id: "0c8a5140613f466c8107ca22c4400874507b9d96db9fc14bb7f9ecf967957942"
	I1227 09:37:03.466413  631392 cri.go:96] found id: "0f35c56ae0629feaaf5192c69a9a652f101e67591c4c93f500daa6ceb2a62911"
	I1227 09:37:03.466417  631392 cri.go:96] found id: ""
	I1227 09:37:03.466465  631392 ssh_runner.go:195] Run: sudo runc list -f json
	W1227 09:37:03.479438  631392 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T09:37:03Z" level=error msg="open /run/runc: no such file or directory"
	I1227 09:37:03.479541  631392 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1227 09:37:03.489354  631392 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1227 09:37:03.489372  631392 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1227 09:37:03.489431  631392 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1227 09:37:03.497550  631392 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1227 09:37:03.498709  631392 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-497722" does not appear in /home/jenkins/minikube-integration/22343-373581/kubeconfig
	I1227 09:37:03.499508  631392 kubeconfig.go:62] /home/jenkins/minikube-integration/22343-373581/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-497722" cluster setting kubeconfig missing "default-k8s-diff-port-497722" context setting]
	I1227 09:37:03.500656  631392 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-373581/kubeconfig: {Name:mkb411689a0952d3f8325ae3ededdb7352d2980c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:37:03.503000  631392 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1227 09:37:03.510898  631392 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.103.2
	I1227 09:37:03.510931  631392 kubeadm.go:602] duration metric: took 21.551771ms to restartPrimaryControlPlane
	I1227 09:37:03.510940  631392 kubeadm.go:403] duration metric: took 77.869263ms to StartCluster
	I1227 09:37:03.510958  631392 settings.go:142] acquiring lock: {Name:mkc867fd794159ff1847b43b60548e454c403aa9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:37:03.511015  631392 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22343-373581/kubeconfig
	I1227 09:37:03.512421  631392 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-373581/kubeconfig: {Name:mkb411689a0952d3f8325ae3ededdb7352d2980c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:37:03.512668  631392 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1227 09:37:03.512733  631392 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1227 09:37:03.512865  631392 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-497722"
	I1227 09:37:03.512886  631392 config.go:182] Loaded profile config "default-k8s-diff-port-497722": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:37:03.512903  631392 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-497722"
	I1227 09:37:03.512924  631392 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-497722"
	I1227 09:37:03.512892  631392 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-497722"
	W1227 09:37:03.512933  631392 addons.go:248] addon dashboard should already be in state true
	W1227 09:37:03.512937  631392 addons.go:248] addon storage-provisioner should already be in state true
	I1227 09:37:03.512934  631392 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-497722"
	I1227 09:37:03.512963  631392 host.go:66] Checking if "default-k8s-diff-port-497722" exists ...
	I1227 09:37:03.512991  631392 host.go:66] Checking if "default-k8s-diff-port-497722" exists ...
	I1227 09:37:03.512985  631392 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-497722"
	I1227 09:37:03.513292  631392 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-497722 --format={{.State.Status}}
	I1227 09:37:03.513452  631392 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-497722 --format={{.State.Status}}
	I1227 09:37:03.513483  631392 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-497722 --format={{.State.Status}}
	I1227 09:37:03.515099  631392 out.go:179] * Verifying Kubernetes components...
	I1227 09:37:03.516218  631392 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 09:37:03.545006  631392 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-497722"
	W1227 09:37:03.545032  631392 addons.go:248] addon default-storageclass should already be in state true
	I1227 09:37:03.545065  631392 host.go:66] Checking if "default-k8s-diff-port-497722" exists ...
	I1227 09:37:03.545116  631392 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1227 09:37:03.545116  631392 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1227 09:37:03.545523  631392 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-497722 --format={{.State.Status}}
	I1227 09:37:03.546781  631392 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1227 09:37:03.546811  631392 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1227 09:37:03.546873  631392 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-497722
	I1227 09:37:03.547729  631392 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1227 09:37:00.496152  629532 addons.go:530] duration metric: took 1.96813915s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1227 09:37:00.986939  629532 api_server.go:299] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1227 09:37:00.991714  629532 api_server.go:325] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1227 09:37:00.992754  629532 api_server.go:141] control plane version: v1.35.0
	I1227 09:37:00.992785  629532 api_server.go:131] duration metric: took 506.570525ms to wait for apiserver health ...
	I1227 09:37:00.992822  629532 system_pods.go:43] waiting for kube-system pods to appear ...
	I1227 09:37:00.997469  629532 system_pods.go:59] 8 kube-system pods found
	I1227 09:37:00.999467  629532 system_pods.go:61] "coredns-7d764666f9-wnzhx" [2152780b-b980-4e4a-b652-9cd0ec857a2a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 09:37:00.999484  629532 system_pods.go:61] "etcd-no-preload-963457" [ecf5237d-e6e0-4cee-977a-e2356ec1db8f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1227 09:37:00.999494  629532 system_pods.go:61] "kindnet-7kw8b" [36ed3b47-67e2-483b-9563-f3366df2f0c5] Running
	I1227 09:37:00.999504  629532 system_pods.go:61] "kube-apiserver-no-preload-963457" [95e8b881-2ba5-464a-951b-8cfe1d65f33b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1227 09:37:00.999520  629532 system_pods.go:61] "kube-controller-manager-no-preload-963457" [2f715235-2f6e-414e-911b-d6ee55e67a06] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1227 09:37:00.999526  629532 system_pods.go:61] "kube-proxy-grkqs" [6f0a5a48-9159-478a-8949-827103a7c85c] Running
	I1227 09:37:00.999537  629532 system_pods.go:61] "kube-scheduler-no-preload-963457" [f5b9da44-e7d5-4901-b9c6-2315b873d53a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1227 09:37:00.999542  629532 system_pods.go:61] "storage-provisioner" [3de40d14-cc88-44f8-a071-caea798ff465] Running
	I1227 09:37:00.999552  629532 system_pods.go:74] duration metric: took 6.721122ms to wait for pod list to return data ...
	I1227 09:37:00.999566  629532 default_sa.go:34] waiting for default service account to be created ...
	I1227 09:37:01.002442  629532 default_sa.go:45] found service account: "default"
	I1227 09:37:01.002467  629532 default_sa.go:55] duration metric: took 2.893913ms for default service account to be created ...
	I1227 09:37:01.002476  629532 system_pods.go:116] waiting for k8s-apps to be running ...
	I1227 09:37:01.005574  629532 system_pods.go:86] 8 kube-system pods found
	I1227 09:37:01.005606  629532 system_pods.go:89] "coredns-7d764666f9-wnzhx" [2152780b-b980-4e4a-b652-9cd0ec857a2a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 09:37:01.005617  629532 system_pods.go:89] "etcd-no-preload-963457" [ecf5237d-e6e0-4cee-977a-e2356ec1db8f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1227 09:37:01.005636  629532 system_pods.go:89] "kindnet-7kw8b" [36ed3b47-67e2-483b-9563-f3366df2f0c5] Running
	I1227 09:37:01.005647  629532 system_pods.go:89] "kube-apiserver-no-preload-963457" [95e8b881-2ba5-464a-951b-8cfe1d65f33b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1227 09:37:01.005659  629532 system_pods.go:89] "kube-controller-manager-no-preload-963457" [2f715235-2f6e-414e-911b-d6ee55e67a06] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1227 09:37:01.005669  629532 system_pods.go:89] "kube-proxy-grkqs" [6f0a5a48-9159-478a-8949-827103a7c85c] Running
	I1227 09:37:01.005678  629532 system_pods.go:89] "kube-scheduler-no-preload-963457" [f5b9da44-e7d5-4901-b9c6-2315b873d53a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1227 09:37:01.005686  629532 system_pods.go:89] "storage-provisioner" [3de40d14-cc88-44f8-a071-caea798ff465] Running
	I1227 09:37:01.005694  629532 system_pods.go:126] duration metric: took 3.211017ms to wait for k8s-apps to be running ...
	I1227 09:37:01.005703  629532 system_svc.go:44] waiting for kubelet service to be running ....
	I1227 09:37:01.005745  629532 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 09:37:01.019329  629532 system_svc.go:56] duration metric: took 13.615658ms WaitForService to wait for kubelet
	I1227 09:37:01.019357  629532 kubeadm.go:587] duration metric: took 2.491446215s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 09:37:01.019406  629532 node_conditions.go:102] verifying NodePressure condition ...
	I1227 09:37:01.022469  629532 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1227 09:37:01.022494  629532 node_conditions.go:123] node cpu capacity is 8
	I1227 09:37:01.022508  629532 node_conditions.go:105] duration metric: took 3.09307ms to run NodePressure ...
	I1227 09:37:01.022519  629532 start.go:242] waiting for startup goroutines ...
	I1227 09:37:01.022526  629532 start.go:247] waiting for cluster config update ...
	I1227 09:37:01.022539  629532 start.go:256] writing updated cluster config ...
	I1227 09:37:01.022825  629532 ssh_runner.go:195] Run: rm -f paused
	I1227 09:37:01.026415  629532 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1227 09:37:01.032367  629532 pod_ready.go:83] waiting for pod "coredns-7d764666f9-wnzhx" in "kube-system" namespace to be "Ready" or be gone ...
	W1227 09:37:03.037599  629532 pod_ready.go:104] pod "coredns-7d764666f9-wnzhx" is not "Ready", error: <nil>
	I1227 09:37:03.548864  631392 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1227 09:37:03.548881  631392 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1227 09:37:03.548946  631392 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-497722
	I1227 09:37:03.580601  631392 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33468 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/default-k8s-diff-port-497722/id_rsa Username:docker}
	I1227 09:37:03.585910  631392 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1227 09:37:03.585995  631392 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1227 09:37:03.586083  631392 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-497722
	I1227 09:37:03.587524  631392 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33468 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/default-k8s-diff-port-497722/id_rsa Username:docker}
	I1227 09:37:03.611367  631392 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33468 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/default-k8s-diff-port-497722/id_rsa Username:docker}
	I1227 09:37:03.693388  631392 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 09:37:03.735280  631392 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1227 09:37:03.735311  631392 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1227 09:37:03.739715  631392 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-497722" to be "Ready" ...
	I1227 09:37:03.745369  631392 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1227 09:37:03.756381  631392 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1227 09:37:03.772524  631392 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1227 09:37:03.772552  631392 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1227 09:37:03.796801  631392 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1227 09:37:03.796827  631392 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1227 09:37:03.830005  631392 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1227 09:37:03.830031  631392 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1227 09:37:03.859155  631392 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1227 09:37:03.859261  631392 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1227 09:37:03.882955  631392 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1227 09:37:03.883028  631392 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1227 09:37:03.903109  631392 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1227 09:37:03.903135  631392 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1227 09:37:03.920352  631392 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1227 09:37:03.920389  631392 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1227 09:37:03.938041  631392 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1227 09:37:03.938067  631392 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1227 09:37:03.952702  631392 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1227 09:37:05.081740  631392 node_ready.go:49] node "default-k8s-diff-port-497722" is "Ready"
	I1227 09:37:05.081784  631392 node_ready.go:38] duration metric: took 1.342025698s for node "default-k8s-diff-port-497722" to be "Ready" ...
	I1227 09:37:05.081817  631392 api_server.go:52] waiting for apiserver process to appear ...
	I1227 09:37:05.081879  631392 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 09:37:05.822291  631392 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.076880271s)
	I1227 09:37:05.822373  631392 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.065940093s)
	I1227 09:37:05.822447  631392 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.869714406s)
	I1227 09:37:05.822496  631392 api_server.go:72] duration metric: took 2.309795438s to wait for apiserver process to appear ...
	I1227 09:37:05.822521  631392 api_server.go:88] waiting for apiserver healthz status ...
	I1227 09:37:05.822603  631392 api_server.go:299] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I1227 09:37:05.823594  631392 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-497722 addons enable metrics-server
	
	I1227 09:37:05.828585  631392 api_server.go:325] https://192.168.103.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1227 09:37:05.828612  631392 api_server.go:103] status: https://192.168.103.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1227 09:37:05.833008  631392 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1227 09:37:05.834729  631392 addons.go:530] duration metric: took 2.322002327s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
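The 500 above comes from post-start hooks (rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes) that have not finished yet; minikube simply re-polls /healthz until it returns 200, which happens at 09:37:06 below. A minimal polling sketch, assuming the self-signed apiserver cert is skipped rather than verified against minikube's CA:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitHealthz polls the endpoint until it returns 200 or the deadline
// passes. InsecureSkipVerify stands in for trusting minikube's CA, which a
// real client should load instead.
func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("no 200 from %s within %s", url, timeout)
}

func main() {
	fmt.Println(waitHealthz("https://192.168.103.2:8444/healthz", time.Minute))
}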
	I1227 09:37:02.827049  630355 out.go:252]   - Booting up control plane ...
	I1227 09:37:02.827173  630355 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1227 09:37:02.827659  630355 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1227 09:37:02.828658  630355 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1227 09:37:02.846003  630355 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1227 09:37:02.846144  630355 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1227 09:37:02.856948  630355 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1227 09:37:02.857081  630355 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1227 09:37:02.857138  630355 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1227 09:37:02.965432  630355 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1227 09:37:02.965600  630355 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1227 09:37:03.468145  630355 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.896581ms
	I1227 09:37:03.472302  630355 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1227 09:37:03.472420  630355 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1227 09:37:03.472580  630355 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1227 09:37:03.472737  630355 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1227 09:37:04.484316  630355 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.011409443s
	I1227 09:37:05.492077  630355 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.019567088s
	I1227 09:37:06.973882  630355 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.501519063s
	I1227 09:37:06.989241  630355 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1227 09:37:06.997155  630355 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1227 09:37:07.005486  630355 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1227 09:37:07.005698  630355 kubeadm.go:319] [mark-control-plane] Marking the node newest-cni-246956 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1227 09:37:07.013098  630355 kubeadm.go:319] [bootstrap-token] Using token: kw0ne7.e0ofxhotwu7t62i6
	I1227 09:37:07.014365  630355 out.go:252]   - Configuring RBAC rules ...
	I1227 09:37:07.014525  630355 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1227 09:37:07.017519  630355 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1227 09:37:07.022777  630355 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1227 09:37:07.025010  630355 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1227 09:37:07.028326  630355 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1227 09:37:07.030560  630355 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1227 09:37:07.380628  630355 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1227 09:37:07.798585  630355 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1227 09:37:08.380637  630355 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1227 09:37:08.381938  630355 kubeadm.go:319] 
	I1227 09:37:08.382044  630355 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1227 09:37:08.382060  630355 kubeadm.go:319] 
	I1227 09:37:08.382154  630355 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1227 09:37:08.382182  630355 kubeadm.go:319] 
	I1227 09:37:08.382220  630355 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1227 09:37:08.382289  630355 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1227 09:37:08.382354  630355 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1227 09:37:08.382366  630355 kubeadm.go:319] 
	I1227 09:37:08.382438  630355 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1227 09:37:08.382449  630355 kubeadm.go:319] 
	I1227 09:37:08.382507  630355 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1227 09:37:08.382522  630355 kubeadm.go:319] 
	I1227 09:37:08.382996  630355 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1227 09:37:08.383196  630355 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1227 09:37:08.383307  630355 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1227 09:37:08.383318  630355 kubeadm.go:319] 
	I1227 09:37:08.383460  630355 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1227 09:37:08.383582  630355 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1227 09:37:08.383593  630355 kubeadm.go:319] 
	I1227 09:37:08.383722  630355 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token kw0ne7.e0ofxhotwu7t62i6 \
	I1227 09:37:08.383907  630355 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:fdd38f26fbaca57ed47cc723a4a49372698d5a31b243372bf53d5b274efa96f5 \
	I1227 09:37:08.383937  630355 kubeadm.go:319] 	--control-plane 
	I1227 09:37:08.383942  630355 kubeadm.go:319] 
	I1227 09:37:08.384084  630355 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1227 09:37:08.384101  630355 kubeadm.go:319] 
	I1227 09:37:08.384257  630355 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token kw0ne7.e0ofxhotwu7t62i6 \
	I1227 09:37:08.384453  630355 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:fdd38f26fbaca57ed47cc723a4a49372698d5a31b243372bf53d5b274efa96f5 
	I1227 09:37:08.387345  630355 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1227 09:37:08.387526  630355 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
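The --discovery-token-ca-cert-hash printed by kubeadm above is the SHA-256 of the CA certificate's DER-encoded Subject Public Key Info. It can be recomputed from the ca.crt used earlier in these logs (a sketch):

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM data in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// kubeadm hashes the DER-encoded SubjectPublicKeyInfo of the CA key.
	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
	if err != nil {
		panic(err)
	}
	fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
}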
	I1227 09:37:08.387558  630355 cni.go:84] Creating CNI manager for ""
	I1227 09:37:08.387571  630355 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 09:37:08.389255  630355 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1227 09:37:05.046412  629532 pod_ready.go:104] pod "coredns-7d764666f9-wnzhx" is not "Ready", error: <nil>
	W1227 09:37:07.538510  629532 pod_ready.go:104] pod "coredns-7d764666f9-wnzhx" is not "Ready", error: <nil>
	W1227 09:37:09.541062  629532 pod_ready.go:104] pod "coredns-7d764666f9-wnzhx" is not "Ready", error: <nil>
	I1227 09:37:06.322979  631392 api_server.go:299] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I1227 09:37:06.328435  631392 api_server.go:325] https://192.168.103.2:8444/healthz returned 200:
	ok
	I1227 09:37:06.329947  631392 api_server.go:141] control plane version: v1.35.0
	I1227 09:37:06.329981  631392 api_server.go:131] duration metric: took 507.399256ms to wait for apiserver health ...
	I1227 09:37:06.329993  631392 system_pods.go:43] waiting for kube-system pods to appear ...
	I1227 09:37:06.333774  631392 system_pods.go:59] 8 kube-system pods found
	I1227 09:37:06.333837  631392 system_pods.go:61] "coredns-7d764666f9-wfv5r" [c9445108-899a-4589-9501-4ffa7cd80a43] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 09:37:06.333861  631392 system_pods.go:61] "etcd-default-k8s-diff-port-497722" [5e51d021-5ac6-42ab-82c9-2f3db25a111e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1227 09:37:06.333877  631392 system_pods.go:61] "kindnet-rd4dj" [c9a44ecf-3860-4022-bcbe-25cfdb86502a] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1227 09:37:06.333888  631392 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-497722" [fc2ed6d4-1f2c-4e4d-9a59-07b9ef712893] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1227 09:37:06.333905  631392 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-497722" [c56ec738-10be-417c-808c-c65189f7a6a8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1227 09:37:06.333918  631392 system_pods.go:61] "kube-proxy-6z4vt" [25c2458e-d68a-488d-803c-80e0c6191bad] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1227 09:37:06.333929  631392 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-497722" [fccea7b8-16e7-488f-b2a4-3d46c2d108d9] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1227 09:37:06.333937  631392 system_pods.go:61] "storage-provisioner" [c84aab32-9d34-4b1d-a3ee-813926808b75] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1227 09:37:06.333948  631392 system_pods.go:74] duration metric: took 3.947114ms to wait for pod list to return data ...
	I1227 09:37:06.333961  631392 default_sa.go:34] waiting for default service account to be created ...
	I1227 09:37:06.336565  631392 default_sa.go:45] found service account: "default"
	I1227 09:37:06.336587  631392 default_sa.go:55] duration metric: took 2.617601ms for default service account to be created ...
	I1227 09:37:06.336597  631392 system_pods.go:116] waiting for k8s-apps to be running ...
	I1227 09:37:06.339330  631392 system_pods.go:86] 8 kube-system pods found
	I1227 09:37:06.339360  631392 system_pods.go:89] "coredns-7d764666f9-wfv5r" [c9445108-899a-4589-9501-4ffa7cd80a43] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 09:37:06.339372  631392 system_pods.go:89] "etcd-default-k8s-diff-port-497722" [5e51d021-5ac6-42ab-82c9-2f3db25a111e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1227 09:37:06.339386  631392 system_pods.go:89] "kindnet-rd4dj" [c9a44ecf-3860-4022-bcbe-25cfdb86502a] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1227 09:37:06.339401  631392 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-497722" [fc2ed6d4-1f2c-4e4d-9a59-07b9ef712893] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1227 09:37:06.339414  631392 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-497722" [c56ec738-10be-417c-808c-c65189f7a6a8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1227 09:37:06.339422  631392 system_pods.go:89] "kube-proxy-6z4vt" [25c2458e-d68a-488d-803c-80e0c6191bad] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1227 09:37:06.339436  631392 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-497722" [fccea7b8-16e7-488f-b2a4-3d46c2d108d9] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1227 09:37:06.339450  631392 system_pods.go:89] "storage-provisioner" [c84aab32-9d34-4b1d-a3ee-813926808b75] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1227 09:37:06.339460  631392 system_pods.go:126] duration metric: took 2.854974ms to wait for k8s-apps to be running ...
	I1227 09:37:06.339469  631392 system_svc.go:44] waiting for kubelet service to be running ....
	I1227 09:37:06.339521  631392 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 09:37:06.357773  631392 system_svc.go:56] duration metric: took 18.294659ms WaitForService to wait for kubelet
	I1227 09:37:06.357818  631392 kubeadm.go:587] duration metric: took 2.845118615s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 09:37:06.357845  631392 node_conditions.go:102] verifying NodePressure condition ...
	I1227 09:37:06.360952  631392 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1227 09:37:06.360981  631392 node_conditions.go:123] node cpu capacity is 8
	I1227 09:37:06.361002  631392 node_conditions.go:105] duration metric: took 3.142212ms to run NodePressure ...
	I1227 09:37:06.361018  631392 start.go:242] waiting for startup goroutines ...
	I1227 09:37:06.361047  631392 start.go:247] waiting for cluster config update ...
	I1227 09:37:06.361060  631392 start.go:256] writing updated cluster config ...
	I1227 09:37:06.361365  631392 ssh_runner.go:195] Run: rm -f paused
	I1227 09:37:06.366245  631392 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1227 09:37:06.370043  631392 pod_ready.go:83] waiting for pod "coredns-7d764666f9-wfv5r" in "kube-system" namespace to be "Ready" or be gone ...
	W1227 09:37:08.375981  631392 pod_ready.go:104] pod "coredns-7d764666f9-wfv5r" is not "Ready", error: <nil>
	W1227 09:37:10.379441  631392 pod_ready.go:104] pod "coredns-7d764666f9-wfv5r" is not "Ready", error: <nil>
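These pod_ready warnings are the extra 4m0s wait announced at 09:37:06: the coredns pod is Running but its container has not yet passed its readiness probe. A standalone equivalent using kubectl wait (a sketch; assumes kubectl on PATH and a kubeconfig pointing at this cluster):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// kubectl wait blocks until the Ready condition holds or the timeout
	// expires, matching the 4m0s extra wait in the log above.
	out, err := exec.Command("kubectl", "wait", "--namespace=kube-system",
		"--for=condition=Ready", "pod", "-l", "k8s-app=kube-dns",
		"--timeout=4m").CombinedOutput()
	fmt.Println(string(out), err)
}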
	I1227 09:37:08.390254  630355 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1227 09:37:08.395438  630355 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.35.0/kubectl ...
	I1227 09:37:08.395460  630355 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I1227 09:37:08.411692  630355 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1227 09:37:08.701041  630355 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1227 09:37:08.701214  630355 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 09:37:08.701332  630355 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-246956 minikube.k8s.io/updated_at=2025_12_27T09_37_08_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6d12708197eb5984f26340287ced0d25367967e8 minikube.k8s.io/name=newest-cni-246956 minikube.k8s.io/primary=true
	I1227 09:37:08.716553  630355 ops.go:34] apiserver oom_adj: -16
	I1227 09:37:08.804282  630355 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 09:37:09.304609  630355 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 09:37:09.804478  630355 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 09:37:10.305050  630355 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 09:37:10.804402  630355 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 09:37:11.304767  630355 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 09:37:11.804674  630355 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 09:37:12.305069  630355 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 09:37:12.805104  630355 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 09:37:13.304712  630355 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 09:37:13.404981  630355 kubeadm.go:1114] duration metric: took 4.703817064s to wait for elevateKubeSystemPrivileges
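The repeated `kubectl get sa default` invocations above are elevateKubeSystemPrivileges polling every ~500ms until kube-controller-manager has created the default service account. The same wait loop as a sketch (paths from the log; the timing and helper name are illustrative):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitDefaultSA retries `kubectl get sa default` until the account exists,
// as the repeated Run lines above show minikube doing.
func waitDefaultSA(kubectl, kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("sudo", kubectl, "get", "sa", "default", "--kubeconfig="+kubeconfig)
		if cmd.Run() == nil {
			return nil // default service account exists
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default service account not created within %s", timeout)
}

func main() {
	fmt.Println(waitDefaultSA("/var/lib/minikube/binaries/v1.35.0/kubectl",
		"/var/lib/minikube/kubeconfig", 2*time.Minute))
}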
	I1227 09:37:13.405019  630355 kubeadm.go:403] duration metric: took 12.787533089s to StartCluster
	I1227 09:37:13.405045  630355 settings.go:142] acquiring lock: {Name:mkc867fd794159ff1847b43b60548e454c403aa9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:37:13.405126  630355 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22343-373581/kubeconfig
	I1227 09:37:13.407805  630355 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-373581/kubeconfig: {Name:mkb411689a0952d3f8325ae3ededdb7352d2980c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:37:13.408105  630355 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1227 09:37:13.408219  630355 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1227 09:37:13.408237  630355 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1227 09:37:13.408318  630355 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-246956"
	I1227 09:37:13.408346  630355 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-246956"
	I1227 09:37:13.408385  630355 host.go:66] Checking if "newest-cni-246956" exists ...
	I1227 09:37:13.408436  630355 config.go:182] Loaded profile config "newest-cni-246956": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:37:13.408377  630355 addons.go:70] Setting default-storageclass=true in profile "newest-cni-246956"
	I1227 09:37:13.408494  630355 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-246956"
	I1227 09:37:13.408941  630355 cli_runner.go:164] Run: docker container inspect newest-cni-246956 --format={{.State.Status}}
	I1227 09:37:13.408985  630355 cli_runner.go:164] Run: docker container inspect newest-cni-246956 --format={{.State.Status}}
	I1227 09:37:13.410120  630355 out.go:179] * Verifying Kubernetes components...
	I1227 09:37:13.411386  630355 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 09:37:13.436409  630355 addons.go:239] Setting addon default-storageclass=true in "newest-cni-246956"
	I1227 09:37:13.436462  630355 host.go:66] Checking if "newest-cni-246956" exists ...
	I1227 09:37:13.436486  630355 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1227 09:37:13.436995  630355 cli_runner.go:164] Run: docker container inspect newest-cni-246956 --format={{.State.Status}}
	I1227 09:37:13.437815  630355 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1227 09:37:13.437836  630355 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1227 09:37:13.437890  630355 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-246956
	I1227 09:37:13.470222  630355 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1227 09:37:13.470333  630355 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1227 09:37:13.470460  630355 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-246956
	I1227 09:37:13.470880  630355 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/newest-cni-246956/id_rsa Username:docker}
	I1227 09:37:13.501499  630355 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/newest-cni-246956/id_rsa Username:docker}
	I1227 09:37:13.522610  630355 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1227 09:37:13.595670  630355 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1227 09:37:13.601932  630355 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 09:37:13.631849  630355 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1227 09:37:13.821310  630355 start.go:987] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1227 09:37:14.024585  630355 api_server.go:52] waiting for apiserver process to appear ...
	I1227 09:37:14.024669  630355 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 09:37:14.038954  630355 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1227 09:37:14.039897  630355 addons.go:530] duration metric: took 631.665184ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1227 09:37:14.044768  630355 api_server.go:72] duration metric: took 636.623759ms to wait for apiserver process to appear ...
	I1227 09:37:14.044806  630355 api_server.go:88] waiting for apiserver healthz status ...
	I1227 09:37:14.044828  630355 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1227 09:37:14.051156  630355 api_server.go:325] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1227 09:37:14.052113  630355 api_server.go:141] control plane version: v1.35.0
	I1227 09:37:14.052147  630355 api_server.go:131] duration metric: took 7.331844ms to wait for apiserver health ...
	I1227 09:37:14.052157  630355 system_pods.go:43] waiting for kube-system pods to appear ...
	I1227 09:37:14.055482  630355 system_pods.go:59] 8 kube-system pods found
	I1227 09:37:14.055525  630355 system_pods.go:61] "coredns-7d764666f9-kqzph" [cd4faccb-5994-46cb-a83b-d554df2fb8f2] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1227 09:37:14.055546  630355 system_pods.go:61] "etcd-newest-cni-246956" [26721526-906a-4949-a50f-92ea210b80be] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1227 09:37:14.055559  630355 system_pods.go:61] "kindnet-lmtxw" [e2185b04-5cba-4c54-86e0-9c2515f95074] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1227 09:37:14.055572  630355 system_pods.go:61] "kube-apiserver-newest-cni-246956" [7e3043fd-edc4-4182-8659-eba54f67a2d9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1227 09:37:14.055582  630355 system_pods.go:61] "kube-controller-manager-newest-cni-246956" [7a30adc9-ce06-4908-8f0b-ed3da78f6394] Running
	I1227 09:37:14.055591  630355 system_pods.go:61] "kube-proxy-65ltj" [a1e5773a-e15f-405b-bca5-62a52d6e83a2] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1227 09:37:14.055603  630355 system_pods.go:61] "kube-scheduler-newest-cni-246956" [e515cbde-415b-4a69-b0be-a4c87c86858e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1227 09:37:14.055620  630355 system_pods.go:61] "storage-provisioner" [0735bc86-6017-4c08-8562-4a36fe686929] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1227 09:37:14.055629  630355 system_pods.go:74] duration metric: took 3.463288ms to wait for pod list to return data ...
	I1227 09:37:14.055639  630355 default_sa.go:34] waiting for default service account to be created ...
	I1227 09:37:14.058131  630355 default_sa.go:45] found service account: "default"
	I1227 09:37:14.058152  630355 default_sa.go:55] duration metric: took 2.506015ms for default service account to be created ...
	I1227 09:37:14.058166  630355 kubeadm.go:587] duration metric: took 650.02674ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1227 09:37:14.058190  630355 node_conditions.go:102] verifying NodePressure condition ...
	I1227 09:37:14.060558  630355 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1227 09:37:14.060580  630355 node_conditions.go:123] node cpu capacity is 8
	I1227 09:37:14.060600  630355 node_conditions.go:105] duration metric: took 2.400745ms to run NodePressure ...
	I1227 09:37:14.060615  630355 start.go:242] waiting for startup goroutines ...
	I1227 09:37:14.325652  630355 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-246956" context rescaled to 1 replicas
	I1227 09:37:14.325757  630355 start.go:247] waiting for cluster config update ...
	I1227 09:37:14.325781  630355 start.go:256] writing updated cluster config ...
	I1227 09:37:14.326218  630355 ssh_runner.go:195] Run: rm -f paused
	I1227 09:37:14.386984  630355 start.go:625] kubectl: 1.35.0, cluster: 1.35.0 (minor skew: 0)
	I1227 09:37:14.389355  630355 out.go:179] * Done! kubectl is now configured to use "newest-cni-246956" cluster and "default" namespace by default
	W1227 09:37:12.038861  629532 pod_ready.go:104] pod "coredns-7d764666f9-wnzhx" is not "Ready", error: <nil>
	W1227 09:37:14.039515  629532 pod_ready.go:104] pod "coredns-7d764666f9-wnzhx" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Dec 27 09:36:43 embed-certs-912564 crio[570]: time="2025-12-27T09:36:43.108845615Z" level=info msg="Started container" PID=1786 containerID=743dc93133ddf9acbbb349b6b566d4ac9c36bcf100eb12449773583fa5e5ab76 description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-qwqqw/dashboard-metrics-scraper id=b670ab30-ee73-45f6-8f97-a8d12c6b3403 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f7c1e67845f4d6f3163de439bfb725dae1f0e93f501270cf7f5dc9027407a729
	Dec 27 09:36:43 embed-certs-912564 crio[570]: time="2025-12-27T09:36:43.14542476Z" level=info msg="Removing container: 7d2501ca0199b51ac747ff96467ef5de9e812a54c69349c92b62ad93f34bd323" id=56f40b43-7536-4e09-8051-cc6d694dff6d name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 27 09:36:43 embed-certs-912564 crio[570]: time="2025-12-27T09:36:43.153956859Z" level=info msg="Removed container 7d2501ca0199b51ac747ff96467ef5de9e812a54c69349c92b62ad93f34bd323: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-qwqqw/dashboard-metrics-scraper" id=56f40b43-7536-4e09-8051-cc6d694dff6d name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 27 09:36:56 embed-certs-912564 crio[570]: time="2025-12-27T09:36:56.186564061Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=c2dbbca2-4df1-4596-ab96-6c234e0d864e name=/runtime.v1.ImageService/ImageStatus
	Dec 27 09:36:56 embed-certs-912564 crio[570]: time="2025-12-27T09:36:56.187816462Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=0f649522-0501-4bdb-bcc3-af032aec4e49 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 09:36:56 embed-certs-912564 crio[570]: time="2025-12-27T09:36:56.189453527Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=0f2e7dc1-2af3-4100-8465-b9a997c0cd5a name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 09:36:56 embed-certs-912564 crio[570]: time="2025-12-27T09:36:56.189591703Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 09:36:56 embed-certs-912564 crio[570]: time="2025-12-27T09:36:56.203208493Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 09:36:56 embed-certs-912564 crio[570]: time="2025-12-27T09:36:56.203855197Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/e130f8ab8351e39574bb0e64dc834261ecf85529d013927ea43c7ab5b0bdb450/merged/etc/passwd: no such file or directory"
	Dec 27 09:36:56 embed-certs-912564 crio[570]: time="2025-12-27T09:36:56.203893781Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/e130f8ab8351e39574bb0e64dc834261ecf85529d013927ea43c7ab5b0bdb450/merged/etc/group: no such file or directory"
	Dec 27 09:36:56 embed-certs-912564 crio[570]: time="2025-12-27T09:36:56.204195012Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 09:36:56 embed-certs-912564 crio[570]: time="2025-12-27T09:36:56.250131539Z" level=info msg="Created container 84a07cadb1d9c9aa419b2726218cd527981d9603da3c51e418fea57634d1077b: kube-system/storage-provisioner/storage-provisioner" id=0f2e7dc1-2af3-4100-8465-b9a997c0cd5a name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 09:36:56 embed-certs-912564 crio[570]: time="2025-12-27T09:36:56.252390336Z" level=info msg="Starting container: 84a07cadb1d9c9aa419b2726218cd527981d9603da3c51e418fea57634d1077b" id=54670e2c-41c5-496b-ae3c-e7265fe8d95d name=/runtime.v1.RuntimeService/StartContainer
	Dec 27 09:36:56 embed-certs-912564 crio[570]: time="2025-12-27T09:36:56.255733714Z" level=info msg="Started container" PID=1801 containerID=84a07cadb1d9c9aa419b2726218cd527981d9603da3c51e418fea57634d1077b description=kube-system/storage-provisioner/storage-provisioner id=54670e2c-41c5-496b-ae3c-e7265fe8d95d name=/runtime.v1.RuntimeService/StartContainer sandboxID=f35ddd9bdfaef3f0f960a24c97745740cb977a9371189d095d7500e2901e1e8c
	Dec 27 09:37:12 embed-certs-912564 crio[570]: time="2025-12-27T09:37:12.066247541Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=461a1e46-504d-4acf-b1ae-8578e72228db name=/runtime.v1.ImageService/ImageStatus
	Dec 27 09:37:12 embed-certs-912564 crio[570]: time="2025-12-27T09:37:12.067343251Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=5881a276-678d-432f-8e3f-a420a44e2eaf name=/runtime.v1.ImageService/ImageStatus
	Dec 27 09:37:12 embed-certs-912564 crio[570]: time="2025-12-27T09:37:12.068357523Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-qwqqw/dashboard-metrics-scraper" id=768b595e-35cb-446f-b196-80bbd46f39b1 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 09:37:12 embed-certs-912564 crio[570]: time="2025-12-27T09:37:12.06848401Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 09:37:12 embed-certs-912564 crio[570]: time="2025-12-27T09:37:12.075539762Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 09:37:12 embed-certs-912564 crio[570]: time="2025-12-27T09:37:12.076276394Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 09:37:12 embed-certs-912564 crio[570]: time="2025-12-27T09:37:12.102598429Z" level=info msg="Created container 6e0f67ae51171b32108d2707ab568fce0ed3dd12409f5f0aaedd5e6e725a0f8a: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-qwqqw/dashboard-metrics-scraper" id=768b595e-35cb-446f-b196-80bbd46f39b1 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 09:37:12 embed-certs-912564 crio[570]: time="2025-12-27T09:37:12.103175508Z" level=info msg="Starting container: 6e0f67ae51171b32108d2707ab568fce0ed3dd12409f5f0aaedd5e6e725a0f8a" id=a2abcb8b-df4b-4460-892a-bed11dd4fdc3 name=/runtime.v1.RuntimeService/StartContainer
	Dec 27 09:37:12 embed-certs-912564 crio[570]: time="2025-12-27T09:37:12.105362053Z" level=info msg="Started container" PID=1836 containerID=6e0f67ae51171b32108d2707ab568fce0ed3dd12409f5f0aaedd5e6e725a0f8a description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-qwqqw/dashboard-metrics-scraper id=a2abcb8b-df4b-4460-892a-bed11dd4fdc3 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f7c1e67845f4d6f3163de439bfb725dae1f0e93f501270cf7f5dc9027407a729
	Dec 27 09:37:12 embed-certs-912564 crio[570]: time="2025-12-27T09:37:12.238356953Z" level=info msg="Removing container: 743dc93133ddf9acbbb349b6b566d4ac9c36bcf100eb12449773583fa5e5ab76" id=d5c4dc50-536a-4421-bdca-02caa749fa9a name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 27 09:37:12 embed-certs-912564 crio[570]: time="2025-12-27T09:37:12.249865337Z" level=info msg="Removed container 743dc93133ddf9acbbb349b6b566d4ac9c36bcf100eb12449773583fa5e5ab76: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-qwqqw/dashboard-metrics-scraper" id=d5c4dc50-536a-4421-bdca-02caa749fa9a name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	6e0f67ae51171       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           4 seconds ago       Exited              dashboard-metrics-scraper   3                   f7c1e67845f4d       dashboard-metrics-scraper-867fb5f87b-qwqqw   kubernetes-dashboard
	84a07cadb1d9c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           20 seconds ago      Running             storage-provisioner         1                   f35ddd9bdfaef       storage-provisioner                          kube-system
	6014177c8a204       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   41 seconds ago      Running             kubernetes-dashboard        0                   2fb901db9834a       kubernetes-dashboard-b84665fb8-jlksn         kubernetes-dashboard
	7a965f6d0d9f6       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           51 seconds ago      Running             busybox                     1                   3e1c76567f05c       busybox                                      default
	bbe4999435552       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                           51 seconds ago      Running             coredns                     0                   df36a8e1d7e7d       coredns-7d764666f9-vm5hp                     kube-system
	e321884e2b076       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           51 seconds ago      Exited              storage-provisioner         0                   f35ddd9bdfaef       storage-provisioner                          kube-system
	d3dff99ecfa4a       32652ff1bbe6b6df16a3dc621fcad4e6e185a2852bffc5847aab79e36c54bfb8                                           51 seconds ago      Running             kube-proxy                  0                   018f1c7841b53       kube-proxy-dv8ch                             kube-system
	7281d5c2323a0       4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251                                           51 seconds ago      Running             kindnet-cni                 0                   70da7d1207fdb       kindnet-bznfn                                kube-system
	5383d4cdce95a       0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2                                           53 seconds ago      Running             etcd                        0                   50ed092510ea8       etcd-embed-certs-912564                      kube-system
	4073c03ac98fe       550794e3b12ac21ec7fd940bdfb45f7c8dae3c52acd8c70e580b882de20c3dcc                                           53 seconds ago      Running             kube-scheduler              0                   7dc8a3aba75c6       kube-scheduler-embed-certs-912564            kube-system
	ba83fd494a8c5       2c9a4b058bd7e6ec479c38f9e8a1dac2f5ee5b0a3ebda6dfac968e8720229508                                           53 seconds ago      Running             kube-controller-manager     0                   392c9150aaddb       kube-controller-manager-embed-certs-912564   kube-system
	663c76b88f425       5c6acd67e9cd10eb60a246cd233db251ef62ea97e6572f897e873f0cb648f499                                           53 seconds ago      Running             kube-apiserver              0                   382dedbca1de0       kube-apiserver-embed-certs-912564            kube-system
	
	
	==> coredns [bbe499943555262124a4668032443d02c3df7d492d67bc1fcde5ffe6d8bfbec7] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:55900 - 62424 "HINFO IN 5988268151926662984.6856518043187956369. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.067996727s
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	
	
	==> describe nodes <==
	Name:               embed-certs-912564
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-912564
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6d12708197eb5984f26340287ced0d25367967e8
	                    minikube.k8s.io/name=embed-certs-912564
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_27T09_35_25_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 27 Dec 2025 09:35:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-912564
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 27 Dec 2025 09:37:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 27 Dec 2025 09:37:05 +0000   Sat, 27 Dec 2025 09:35:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 27 Dec 2025 09:37:05 +0000   Sat, 27 Dec 2025 09:35:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 27 Dec 2025 09:37:05 +0000   Sat, 27 Dec 2025 09:35:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 27 Dec 2025 09:37:05 +0000   Sat, 27 Dec 2025 09:35:43 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    embed-certs-912564
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 6a05ebbd9dbde47388fc365d694bbbfd
	  System UUID:                399800ce-3f0c-4a8a-a24c-ac96dc71a9c4
	  Boot ID:                    38891013-55ac-4a66-aef6-0b0711ecc60c
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 coredns-7d764666f9-vm5hp                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     106s
	  kube-system                 etcd-embed-certs-912564                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         112s
	  kube-system                 kindnet-bznfn                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      106s
	  kube-system                 kube-apiserver-embed-certs-912564             250m (3%)     0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-controller-manager-embed-certs-912564    200m (2%)     0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-proxy-dv8ch                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kube-system                 kube-scheduler-embed-certs-912564             100m (1%)     0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kubernetes-dashboard        dashboard-metrics-scraper-867fb5f87b-qwqqw    0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-jlksn          0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  108s  node-controller  Node embed-certs-912564 event: Registered Node embed-certs-912564 in Controller
	  Normal  RegisteredNode  49s   node-controller  Node embed-certs-912564 event: Registered Node embed-certs-912564 in Controller
	
	
	==> dmesg <==
	[Dec27 09:02] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 3a ce 76 ea 00 16 08 06
	[ +15.308804] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 5e ed 76 f4 5a 59 08 06
	[  +0.000321] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 3a ce 76 ea 00 16 08 06
	[  +1.257710] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 62 30 8d 6e b6 d3 08 06
	[Dec27 09:03] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff de f8 3b c4 47 35 08 06
	[  +0.002052] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff de 0f d7 5a 4b 84 08 06
	[ +15.582399] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff f6 ce 0c 6c 86 d2 08 06
	[  +0.000307] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 62 30 8d 6e b6 d3 08 06
	[  +9.916919] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 5e cc 0c a0 d8 19 08 06
	[  +0.000360] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 06 01 72 f8 79 43 08 06
	[ +21.695394] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 66 bb be 81 50 ce 08 06
	[  +0.000381] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff de 0f d7 5a 4b 84 08 06
	
	
	==> etcd [5383d4cdce95af97f9b9e8e07db61c856f19c8db586c179d8ff736a43046829e] <==
	{"level":"info","ts":"2025-12-27T09:36:22.629243Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"da400bbece288f5a","local-member-id":"dfc97eb0aae75b33","from":"3.6","to":"3.6"}
	{"level":"info","ts":"2025-12-27T09:36:22.629411Z","caller":"embed/etcd.go:292","msg":"now serving peer/client/metrics","local-member-id":"dfc97eb0aae75b33","initial-advertise-peer-urls":["https://192.168.94.2:2380"],"listen-peer-urls":["https://192.168.94.2:2380"],"advertise-client-urls":["https://192.168.94.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.94.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-12-27T09:36:22.629469Z","caller":"embed/etcd.go:890","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-12-27T09:36:22.627781Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-27T09:36:22.630113Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-27T09:36:22.627431Z","caller":"etcdserver/server.go:483","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"dfc97eb0aae75b33","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2025-12-27T09:36:22.630225Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-27T09:36:23.318082Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"dfc97eb0aae75b33 is starting a new election at term 2"}
	{"level":"info","ts":"2025-12-27T09:36:23.318150Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"dfc97eb0aae75b33 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-12-27T09:36:23.318224Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"dfc97eb0aae75b33 received MsgPreVoteResp from dfc97eb0aae75b33 at term 2"}
	{"level":"info","ts":"2025-12-27T09:36:23.318247Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"dfc97eb0aae75b33 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-27T09:36:23.318271Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"dfc97eb0aae75b33 became candidate at term 3"}
	{"level":"info","ts":"2025-12-27T09:36:23.318893Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"dfc97eb0aae75b33 received MsgVoteResp from dfc97eb0aae75b33 at term 3"}
	{"level":"info","ts":"2025-12-27T09:36:23.318919Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"dfc97eb0aae75b33 has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-27T09:36:23.318939Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"dfc97eb0aae75b33 became leader at term 3"}
	{"level":"info","ts":"2025-12-27T09:36:23.318951Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: dfc97eb0aae75b33 elected leader dfc97eb0aae75b33 at term 3"}
	{"level":"info","ts":"2025-12-27T09:36:23.319636Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"dfc97eb0aae75b33","local-member-attributes":"{Name:embed-certs-912564 ClientURLs:[https://192.168.94.2:2379]}","cluster-id":"da400bbece288f5a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-27T09:36:23.319641Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T09:36:23.319671Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T09:36:23.319870Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-27T09:36:23.319912Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-27T09:36:23.320630Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-27T09:36:23.320721Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-27T09:36:23.323694Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-27T09:36:23.323881Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.94.2:2379"}
	
	
	==> kernel <==
	 09:37:16 up  1:19,  0 user,  load average: 3.03, 3.06, 2.33
	Linux embed-certs-912564 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [7281d5c2323a07673639860e705dd2779b623d238dca9f09c1c16c035ce01a03] <==
	I1227 09:36:25.606206       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1227 09:36:25.606494       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1227 09:36:25.606674       1 main.go:148] setting mtu 1500 for CNI 
	I1227 09:36:25.606696       1 main.go:178] kindnetd IP family: "ipv4"
	I1227 09:36:25.606719       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-27T09:36:25Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1227 09:36:25.805783       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1227 09:36:25.806128       1 controller.go:381] "Waiting for informer caches to sync"
	I1227 09:36:25.806474       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1227 09:36:25.806685       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1227 09:36:26.207345       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1227 09:36:26.207372       1 metrics.go:72] Registering metrics
	I1227 09:36:26.207451       1 controller.go:711] "Syncing nftables rules"
	I1227 09:36:35.806446       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1227 09:36:35.806522       1 main.go:301] handling current node
	I1227 09:36:45.806954       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1227 09:36:45.806994       1 main.go:301] handling current node
	I1227 09:36:55.805992       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1227 09:36:55.806045       1 main.go:301] handling current node
	I1227 09:37:05.806487       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1227 09:37:05.806560       1 main.go:301] handling current node
	I1227 09:37:15.808466       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1227 09:37:15.808534       1 main.go:301] handling current node
	
	
	==> kube-apiserver [663c76b88f42532f7c763b6916bdc80252b590b27aa690c8fe09d547aca1eb6c] <==
	I1227 09:36:24.256920       1 shared_informer.go:377] "Caches are synced"
	I1227 09:36:24.256951       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1227 09:36:24.256954       1 cache.go:39] Caches are synced for autoregister controller
	I1227 09:36:24.257103       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1227 09:36:24.257200       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1227 09:36:24.257225       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1227 09:36:24.257238       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1227 09:36:24.257688       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1227 09:36:24.264580       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1227 09:36:24.265347       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1227 09:36:24.306610       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1227 09:36:24.313890       1 shared_informer.go:377] "Caches are synced"
	I1227 09:36:24.313907       1 policy_source.go:248] refreshing policies
	I1227 09:36:24.391760       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1227 09:36:24.523284       1 controller.go:667] quota admission added evaluator for: namespaces
	I1227 09:36:24.549322       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1227 09:36:24.565690       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1227 09:36:24.573046       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1227 09:36:24.580004       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1227 09:36:24.606716       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.98.67.86"}
	I1227 09:36:24.617302       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.98.126.34"}
	I1227 09:36:25.159542       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1227 09:36:27.919019       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1227 09:36:28.077654       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1227 09:36:28.118456       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [ba83fd494a8c5a1bc7eb22555934e2b74494963aa284a3786fa73f76c60a9175] <==
	I1227 09:36:27.420974       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="embed-certs-912564"
	I1227 09:36:27.421032       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I1227 09:36:27.420589       1 shared_informer.go:377] "Caches are synced"
	I1227 09:36:27.421277       1 shared_informer.go:377] "Caches are synced"
	I1227 09:36:27.420582       1 shared_informer.go:377] "Caches are synced"
	I1227 09:36:27.420590       1 shared_informer.go:377] "Caches are synced"
	I1227 09:36:27.420615       1 shared_informer.go:377] "Caches are synced"
	I1227 09:36:27.420569       1 shared_informer.go:377] "Caches are synced"
	I1227 09:36:27.420580       1 shared_informer.go:377] "Caches are synced"
	I1227 09:36:27.421200       1 shared_informer.go:377] "Caches are synced"
	I1227 09:36:27.420599       1 shared_informer.go:377] "Caches are synced"
	I1227 09:36:27.420607       1 shared_informer.go:377] "Caches are synced"
	I1227 09:36:27.421543       1 shared_informer.go:377] "Caches are synced"
	I1227 09:36:27.420569       1 shared_informer.go:377] "Caches are synced"
	I1227 09:36:27.422478       1 shared_informer.go:377] "Caches are synced"
	I1227 09:36:27.422508       1 shared_informer.go:377] "Caches are synced"
	I1227 09:36:27.422543       1 shared_informer.go:377] "Caches are synced"
	I1227 09:36:27.422585       1 shared_informer.go:377] "Caches are synced"
	I1227 09:36:27.423170       1 shared_informer.go:377] "Caches are synced"
	I1227 09:36:27.429411       1 shared_informer.go:377] "Caches are synced"
	I1227 09:36:27.433978       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 09:36:27.520863       1 shared_informer.go:377] "Caches are synced"
	I1227 09:36:27.520879       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1227 09:36:27.520883       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1227 09:36:27.534563       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [d3dff99ecfa4aa14f6bbd97b1487dfae36574c672747a8bf6c8790ecad04653a] <==
	I1227 09:36:25.460733       1 server_linux.go:53] "Using iptables proxy"
	I1227 09:36:25.529309       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 09:36:25.629783       1 shared_informer.go:377] "Caches are synced"
	I1227 09:36:25.629849       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1227 09:36:25.629946       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1227 09:36:25.648462       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1227 09:36:25.648525       1 server_linux.go:136] "Using iptables Proxier"
	I1227 09:36:25.653766       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1227 09:36:25.654083       1 server.go:529] "Version info" version="v1.35.0"
	I1227 09:36:25.654098       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 09:36:25.655216       1 config.go:200] "Starting service config controller"
	I1227 09:36:25.655248       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1227 09:36:25.655304       1 config.go:403] "Starting serviceCIDR config controller"
	I1227 09:36:25.655326       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1227 09:36:25.655341       1 config.go:309] "Starting node config controller"
	I1227 09:36:25.655581       1 config.go:106] "Starting endpoint slice config controller"
	I1227 09:36:25.655718       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1227 09:36:25.655598       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1227 09:36:25.655746       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1227 09:36:25.755848       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1227 09:36:25.755847       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1227 09:36:25.755875       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [4073c03ac98fe56856e504a1aa0d5a1748d26e6ce500dc31ad8e91ee49384cd6] <==
	I1227 09:36:22.807879       1 serving.go:386] Generated self-signed cert in-memory
	W1227 09:36:24.201239       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1227 09:36:24.201273       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1227 09:36:24.201285       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1227 09:36:24.201294       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1227 09:36:24.232930       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0"
	I1227 09:36:24.233021       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 09:36:24.235510       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1227 09:36:24.235573       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 09:36:24.235586       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1227 09:36:24.235522       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1227 09:36:24.336532       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 27 09:36:36 embed-certs-912564 kubelet[737]: E1227 09:36:36.194228     737 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-embed-certs-912564" containerName="kube-scheduler"
	Dec 27 09:36:37 embed-certs-912564 kubelet[737]: E1227 09:36:37.129461     737 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-embed-certs-912564" containerName="kube-scheduler"
	Dec 27 09:36:40 embed-certs-912564 kubelet[737]: E1227 09:36:40.460398     737 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-embed-certs-912564" containerName="kube-apiserver"
	Dec 27 09:36:41 embed-certs-912564 kubelet[737]: E1227 09:36:41.137550     737 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-embed-certs-912564" containerName="kube-apiserver"
	Dec 27 09:36:43 embed-certs-912564 kubelet[737]: E1227 09:36:43.065512     737 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-qwqqw" containerName="dashboard-metrics-scraper"
	Dec 27 09:36:43 embed-certs-912564 kubelet[737]: I1227 09:36:43.065550     737 scope.go:122] "RemoveContainer" containerID="7d2501ca0199b51ac747ff96467ef5de9e812a54c69349c92b62ad93f34bd323"
	Dec 27 09:36:43 embed-certs-912564 kubelet[737]: I1227 09:36:43.144087     737 scope.go:122] "RemoveContainer" containerID="7d2501ca0199b51ac747ff96467ef5de9e812a54c69349c92b62ad93f34bd323"
	Dec 27 09:36:43 embed-certs-912564 kubelet[737]: E1227 09:36:43.144252     737 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-qwqqw" containerName="dashboard-metrics-scraper"
	Dec 27 09:36:43 embed-certs-912564 kubelet[737]: I1227 09:36:43.144288     737 scope.go:122] "RemoveContainer" containerID="743dc93133ddf9acbbb349b6b566d4ac9c36bcf100eb12449773583fa5e5ab76"
	Dec 27 09:36:43 embed-certs-912564 kubelet[737]: E1227 09:36:43.144441     737 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-qwqqw_kubernetes-dashboard(8704519b-843b-439f-8f79-4db6cfb2c73a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-qwqqw" podUID="8704519b-843b-439f-8f79-4db6cfb2c73a"
	Dec 27 09:36:44 embed-certs-912564 kubelet[737]: E1227 09:36:44.149065     737 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-qwqqw" containerName="dashboard-metrics-scraper"
	Dec 27 09:36:44 embed-certs-912564 kubelet[737]: I1227 09:36:44.149104     737 scope.go:122] "RemoveContainer" containerID="743dc93133ddf9acbbb349b6b566d4ac9c36bcf100eb12449773583fa5e5ab76"
	Dec 27 09:36:44 embed-certs-912564 kubelet[737]: E1227 09:36:44.149279     737 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-qwqqw_kubernetes-dashboard(8704519b-843b-439f-8f79-4db6cfb2c73a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-qwqqw" podUID="8704519b-843b-439f-8f79-4db6cfb2c73a"
	Dec 27 09:36:56 embed-certs-912564 kubelet[737]: I1227 09:36:56.184950     737 scope.go:122] "RemoveContainer" containerID="e321884e2b0761fec8e2206091da271f0e89b9140101ad1d66d55d4f2d049606"
	Dec 27 09:36:57 embed-certs-912564 kubelet[737]: E1227 09:36:57.946295     737 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-vm5hp" containerName="coredns"
	Dec 27 09:37:12 embed-certs-912564 kubelet[737]: E1227 09:37:12.065706     737 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-qwqqw" containerName="dashboard-metrics-scraper"
	Dec 27 09:37:12 embed-certs-912564 kubelet[737]: I1227 09:37:12.065748     737 scope.go:122] "RemoveContainer" containerID="743dc93133ddf9acbbb349b6b566d4ac9c36bcf100eb12449773583fa5e5ab76"
	Dec 27 09:37:12 embed-certs-912564 kubelet[737]: I1227 09:37:12.236696     737 scope.go:122] "RemoveContainer" containerID="743dc93133ddf9acbbb349b6b566d4ac9c36bcf100eb12449773583fa5e5ab76"
	Dec 27 09:37:12 embed-certs-912564 kubelet[737]: E1227 09:37:12.236949     737 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-qwqqw" containerName="dashboard-metrics-scraper"
	Dec 27 09:37:12 embed-certs-912564 kubelet[737]: I1227 09:37:12.236982     737 scope.go:122] "RemoveContainer" containerID="6e0f67ae51171b32108d2707ab568fce0ed3dd12409f5f0aaedd5e6e725a0f8a"
	Dec 27 09:37:12 embed-certs-912564 kubelet[737]: E1227 09:37:12.237176     737 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-qwqqw_kubernetes-dashboard(8704519b-843b-439f-8f79-4db6cfb2c73a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-qwqqw" podUID="8704519b-843b-439f-8f79-4db6cfb2c73a"
	Dec 27 09:37:12 embed-certs-912564 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 27 09:37:13 embed-certs-912564 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 27 09:37:13 embed-certs-912564 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 27 09:37:13 embed-certs-912564 systemd[1]: kubelet.service: Consumed 1.691s CPU time.
	
	
	==> kubernetes-dashboard [6014177c8a2040f77b17c42e8d6d1005c64bd49b4525e35b0d0a748ac43eeb31] <==
	2025/12/27 09:36:34 Starting overwatch
	2025/12/27 09:36:34 Using namespace: kubernetes-dashboard
	2025/12/27 09:36:34 Using in-cluster config to connect to apiserver
	2025/12/27 09:36:34 Using secret token for csrf signing
	2025/12/27 09:36:34 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/27 09:36:34 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/27 09:36:34 Successful initial request to the apiserver, version: v1.35.0
	2025/12/27 09:36:34 Generating JWE encryption key
	2025/12/27 09:36:34 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/27 09:36:34 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/27 09:36:34 Initializing JWE encryption key from synchronized object
	2025/12/27 09:36:34 Creating in-cluster Sidecar client
	2025/12/27 09:36:34 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/27 09:36:34 Serving insecurely on HTTP port: 9090
	2025/12/27 09:37:04 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [84a07cadb1d9c9aa419b2726218cd527981d9603da3c51e418fea57634d1077b] <==
	I1227 09:36:56.272550       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1227 09:36:56.285677       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1227 09:36:56.285845       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1227 09:36:56.291121       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 09:36:59.749910       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 09:37:04.012536       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 09:37:07.610674       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 09:37:10.665351       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 09:37:13.692850       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 09:37:13.701196       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1227 09:37:13.701435       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1227 09:37:13.701691       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-912564_a5da21ff-5449-4f5c-ab2d-073e7576eb10!
	I1227 09:37:13.701895       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"82a4b092-3ec4-4d7c-8528-91199d1bbfdd", APIVersion:"v1", ResourceVersion:"652", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-912564_a5da21ff-5449-4f5c-ab2d-073e7576eb10 became leader
	W1227 09:37:13.708002       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 09:37:13.715506       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1227 09:37:13.802737       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-912564_a5da21ff-5449-4f5c-ab2d-073e7576eb10!
	W1227 09:37:15.720491       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 09:37:15.726937       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [e321884e2b0761fec8e2206091da271f0e89b9140101ad1d66d55d4f2d049606] <==
	I1227 09:36:25.431158       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1227 09:36:55.433605       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-912564 -n embed-certs-912564
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-912564 -n embed-certs-912564: exit status 2 (339.96289ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context embed-certs-912564 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect embed-certs-912564
helpers_test.go:244: (dbg) docker inspect embed-certs-912564:

-- stdout --
	[
	    {
	        "Id": "d1131cb70c56d219673852b4cb35a6ce18fd724e4080483a56df71d480e5a9d8",
	        "Created": "2025-12-27T09:35:13.90835085Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 622541,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-27T09:36:15.973292467Z",
	            "FinishedAt": "2025-12-27T09:36:14.524926326Z"
	        },
	        "Image": "sha256:c444b5e11df9d6bc496256b0e11f4e11deb33a885211ded2c3a4667df55bcb6b",
	        "ResolvConfPath": "/var/lib/docker/containers/d1131cb70c56d219673852b4cb35a6ce18fd724e4080483a56df71d480e5a9d8/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d1131cb70c56d219673852b4cb35a6ce18fd724e4080483a56df71d480e5a9d8/hostname",
	        "HostsPath": "/var/lib/docker/containers/d1131cb70c56d219673852b4cb35a6ce18fd724e4080483a56df71d480e5a9d8/hosts",
	        "LogPath": "/var/lib/docker/containers/d1131cb70c56d219673852b4cb35a6ce18fd724e4080483a56df71d480e5a9d8/d1131cb70c56d219673852b4cb35a6ce18fd724e4080483a56df71d480e5a9d8-json.log",
	        "Name": "/embed-certs-912564",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "embed-certs-912564:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-912564",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d1131cb70c56d219673852b4cb35a6ce18fd724e4080483a56df71d480e5a9d8",
	                "LowerDir": "/var/lib/docker/overlay2/817468c022fdafe4e781f5a83226ccdb150ad3173bf064c178c11a690bbee996-init/diff:/var/lib/docker/overlay2/8c7e7d4b0a6e752d3b033998593f18412fe880cdae3fca36ce4655d5d5dd6a34/diff",
	                "MergedDir": "/var/lib/docker/overlay2/817468c022fdafe4e781f5a83226ccdb150ad3173bf064c178c11a690bbee996/merged",
	                "UpperDir": "/var/lib/docker/overlay2/817468c022fdafe4e781f5a83226ccdb150ad3173bf064c178c11a690bbee996/diff",
	                "WorkDir": "/var/lib/docker/overlay2/817468c022fdafe4e781f5a83226ccdb150ad3173bf064c178c11a690bbee996/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "embed-certs-912564",
	                "Source": "/var/lib/docker/volumes/embed-certs-912564/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-912564",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-912564",
	                "name.minikube.sigs.k8s.io": "embed-certs-912564",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "d749012f8156d0aba2cde7e9a914c49b96e9059f4ffbfc3b583ff42b55f235b2",
	            "SandboxKey": "/var/run/docker/netns/d749012f8156",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33453"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33454"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33457"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33455"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33456"
	                    }
	                ]
	            },
	            "Networks": {
	                "embed-certs-912564": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "a8636b7bd1bb3a484b5591e16629c2b067fb4955cb3fcafbd69f576a7b19eb9b",
	                    "EndpointID": "5dc9accb47f9af8c718d36522128972b22a96cd365e1de1c03caccfdb94aa446",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "16:29:ba:06:27:16",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-912564",
	                        "d1131cb70c56"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-912564 -n embed-certs-912564
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-912564 -n embed-certs-912564: exit status 2 (325.514055ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-912564 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-912564 logs -n 25: (1.112587764s)
helpers_test.go:261: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable dashboard -p old-k8s-version-094398 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-094398       │ jenkins │ v1.37.0 │ 27 Dec 25 09:35 UTC │ 27 Dec 25 09:35 UTC │
	│ start   │ -p old-k8s-version-094398 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-094398       │ jenkins │ v1.37.0 │ 27 Dec 25 09:35 UTC │ 27 Dec 25 09:36 UTC │
	│ delete  │ -p stopped-upgrade-196124                                                                                                                                                                                                                     │ stopped-upgrade-196124       │ jenkins │ v1.37.0 │ 27 Dec 25 09:35 UTC │ 27 Dec 25 09:35 UTC │
	│ start   │ -p default-k8s-diff-port-497722 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-497722 │ jenkins │ v1.37.0 │ 27 Dec 25 09:35 UTC │ 27 Dec 25 09:36 UTC │
	│ addons  │ enable metrics-server -p embed-certs-912564 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-912564           │ jenkins │ v1.37.0 │ 27 Dec 25 09:35 UTC │                     │
	│ stop    │ -p embed-certs-912564 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-912564           │ jenkins │ v1.37.0 │ 27 Dec 25 09:35 UTC │ 27 Dec 25 09:36 UTC │
	│ addons  │ enable dashboard -p embed-certs-912564 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-912564           │ jenkins │ v1.37.0 │ 27 Dec 25 09:36 UTC │ 27 Dec 25 09:36 UTC │
	│ start   │ -p embed-certs-912564 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                        │ embed-certs-912564           │ jenkins │ v1.37.0 │ 27 Dec 25 09:36 UTC │ 27 Dec 25 09:36 UTC │
	│ addons  │ enable metrics-server -p no-preload-963457 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-963457            │ jenkins │ v1.37.0 │ 27 Dec 25 09:36 UTC │                     │
	│ stop    │ -p no-preload-963457 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-963457            │ jenkins │ v1.37.0 │ 27 Dec 25 09:36 UTC │ 27 Dec 25 09:36 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-497722 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-497722 │ jenkins │ v1.37.0 │ 27 Dec 25 09:36 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-497722 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-497722 │ jenkins │ v1.37.0 │ 27 Dec 25 09:36 UTC │ 27 Dec 25 09:36 UTC │
	│ image   │ old-k8s-version-094398 image list --format=json                                                                                                                                                                                               │ old-k8s-version-094398       │ jenkins │ v1.37.0 │ 27 Dec 25 09:36 UTC │ 27 Dec 25 09:36 UTC │
	│ pause   │ -p old-k8s-version-094398 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-094398       │ jenkins │ v1.37.0 │ 27 Dec 25 09:36 UTC │                     │
	│ delete  │ -p old-k8s-version-094398                                                                                                                                                                                                                     │ old-k8s-version-094398       │ jenkins │ v1.37.0 │ 27 Dec 25 09:36 UTC │ 27 Dec 25 09:36 UTC │
	│ addons  │ enable dashboard -p no-preload-963457 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-963457            │ jenkins │ v1.37.0 │ 27 Dec 25 09:36 UTC │ 27 Dec 25 09:36 UTC │
	│ start   │ -p no-preload-963457 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                       │ no-preload-963457            │ jenkins │ v1.37.0 │ 27 Dec 25 09:36 UTC │                     │
	│ delete  │ -p old-k8s-version-094398                                                                                                                                                                                                                     │ old-k8s-version-094398       │ jenkins │ v1.37.0 │ 27 Dec 25 09:36 UTC │ 27 Dec 25 09:36 UTC │
	│ start   │ -p newest-cni-246956 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ newest-cni-246956            │ jenkins │ v1.37.0 │ 27 Dec 25 09:36 UTC │ 27 Dec 25 09:37 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-497722 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-497722 │ jenkins │ v1.37.0 │ 27 Dec 25 09:36 UTC │ 27 Dec 25 09:36 UTC │
	│ start   │ -p default-k8s-diff-port-497722 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-497722 │ jenkins │ v1.37.0 │ 27 Dec 25 09:36 UTC │                     │
	│ image   │ embed-certs-912564 image list --format=json                                                                                                                                                                                                   │ embed-certs-912564           │ jenkins │ v1.37.0 │ 27 Dec 25 09:37 UTC │ 27 Dec 25 09:37 UTC │
	│ pause   │ -p embed-certs-912564 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-912564           │ jenkins │ v1.37.0 │ 27 Dec 25 09:37 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-246956 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-246956            │ jenkins │ v1.37.0 │ 27 Dec 25 09:37 UTC │                     │
	│ stop    │ -p newest-cni-246956 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-246956            │ jenkins │ v1.37.0 │ 27 Dec 25 09:37 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/27 09:36:56
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1227 09:36:56.118033  631392 out.go:360] Setting OutFile to fd 1 ...
	I1227 09:36:56.118317  631392 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:36:56.118328  631392 out.go:374] Setting ErrFile to fd 2...
	I1227 09:36:56.118332  631392 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:36:56.118604  631392 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22343-373581/.minikube/bin
	I1227 09:36:56.119089  631392 out.go:368] Setting JSON to false
	I1227 09:36:56.120292  631392 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4760,"bootTime":1766823456,"procs":323,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1227 09:36:56.120351  631392 start.go:143] virtualization: kvm guest
	I1227 09:36:56.122005  631392 out.go:179] * [default-k8s-diff-port-497722] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1227 09:36:56.123168  631392 notify.go:221] Checking for updates...
	I1227 09:36:56.123180  631392 out.go:179]   - MINIKUBE_LOCATION=22343
	I1227 09:36:56.124207  631392 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 09:36:56.125641  631392 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22343-373581/kubeconfig
	I1227 09:36:56.126923  631392 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22343-373581/.minikube
	I1227 09:36:56.127972  631392 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1227 09:36:56.129126  631392 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1227 09:36:56.130855  631392 config.go:182] Loaded profile config "default-k8s-diff-port-497722": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:36:56.131603  631392 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 09:36:56.156894  631392 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1227 09:36:56.156995  631392 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 09:36:56.237033  631392 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:72 OomKillDisable:false NGoroutines:89 SystemTime:2025-12-27 09:36:56.225326698 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1227 09:36:56.237183  631392 docker.go:319] overlay module found
	I1227 09:36:56.238784  631392 out.go:179] * Using the docker driver based on existing profile
	I1227 09:36:56.239920  631392 start.go:309] selected driver: docker
	I1227 09:36:56.239938  631392 start.go:928] validating driver "docker" against &{Name:default-k8s-diff-port-497722 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-497722 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 09:36:56.240055  631392 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 09:36:56.240864  631392 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 09:36:56.311407  631392 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:77 SystemTime:2025-12-27 09:36:56.301965993 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1227 09:36:56.311684  631392 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 09:36:56.311714  631392 cni.go:84] Creating CNI manager for ""
	I1227 09:36:56.311779  631392 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 09:36:56.311860  631392 start.go:353] cluster config:
	{Name:default-k8s-diff-port-497722 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-497722 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 09:36:56.313709  631392 out.go:179] * Starting "default-k8s-diff-port-497722" primary control-plane node in "default-k8s-diff-port-497722" cluster
	I1227 09:36:56.314728  631392 cache.go:134] Beginning downloading kic base image for docker with crio
	I1227 09:36:56.319525  631392 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
	I1227 09:36:51.503987  630355 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1227 09:36:51.504270  630355 start.go:159] libmachine.API.Create for "newest-cni-246956" (driver="docker")
	I1227 09:36:51.504305  630355 client.go:173] LocalClient.Create starting
	I1227 09:36:51.504380  630355 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca.pem
	I1227 09:36:51.504418  630355 main.go:144] libmachine: Decoding PEM data...
	I1227 09:36:51.504445  630355 main.go:144] libmachine: Parsing certificate...
	I1227 09:36:51.504530  630355 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22343-373581/.minikube/certs/cert.pem
	I1227 09:36:51.504560  630355 main.go:144] libmachine: Decoding PEM data...
	I1227 09:36:51.504578  630355 main.go:144] libmachine: Parsing certificate...
	I1227 09:36:51.505013  630355 cli_runner.go:164] Run: docker network inspect newest-cni-246956 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1227 09:36:51.521118  630355 cli_runner.go:211] docker network inspect newest-cni-246956 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1227 09:36:51.521200  630355 network_create.go:284] running [docker network inspect newest-cni-246956] to gather additional debugging logs...
	I1227 09:36:51.521226  630355 cli_runner.go:164] Run: docker network inspect newest-cni-246956
	W1227 09:36:51.537389  630355 cli_runner.go:211] docker network inspect newest-cni-246956 returned with exit code 1
	I1227 09:36:51.537414  630355 network_create.go:287] error running [docker network inspect newest-cni-246956]: docker network inspect newest-cni-246956: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-246956 not found
	I1227 09:36:51.537439  630355 network_create.go:289] output of [docker network inspect newest-cni-246956]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-246956 not found
	
	** /stderr **
	I1227 09:36:51.537527  630355 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 09:36:51.553978  630355 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-8c57ecff6d5e IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:1e:54:d8:ad:bd:73} reservation:<nil>}
	I1227 09:36:51.554821  630355 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-21a699476be6 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:fe:e8:d9:95:e6:36} reservation:<nil>}
	I1227 09:36:51.555324  630355 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-8e97c5356905 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:f2:d9:6b:42:f5:e3} reservation:<nil>}
	I1227 09:36:51.556124  630355 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e560a0}
	I1227 09:36:51.556148  630355 network_create.go:124] attempt to create docker network newest-cni-246956 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1227 09:36:51.556202  630355 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-246956 newest-cni-246956
	I1227 09:36:51.601256  630355 network_create.go:108] docker network newest-cni-246956 192.168.76.0/24 created
	I1227 09:36:51.601292  630355 kic.go:121] calculated static IP "192.168.76.2" for the "newest-cni-246956" container
	I1227 09:36:51.601382  630355 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1227 09:36:51.618040  630355 cli_runner.go:164] Run: docker volume create newest-cni-246956 --label name.minikube.sigs.k8s.io=newest-cni-246956 --label created_by.minikube.sigs.k8s.io=true
	I1227 09:36:51.634779  630355 oci.go:103] Successfully created a docker volume newest-cni-246956
	I1227 09:36:51.634906  630355 cli_runner.go:164] Run: docker run --rm --name newest-cni-246956-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-246956 --entrypoint /usr/bin/test -v newest-cni-246956:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -d /var/lib
	I1227 09:36:51.985470  630355 oci.go:107] Successfully prepared a docker volume newest-cni-246956
	I1227 09:36:51.985539  630355 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 09:36:51.985556  630355 kic.go:194] Starting extracting preloaded images to volume ...
	I1227 09:36:51.985607  630355 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22343-373581/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-246956:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -I lz4 -xf /preloaded.tar -C /extractDir
	I1227 09:36:55.783686  630355 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22343-373581/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-246956:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -I lz4 -xf /preloaded.tar -C /extractDir: (3.798014883s)
	I1227 09:36:55.783722  630355 kic.go:203] duration metric: took 3.798163626s to extract preloaded images to volume ...
	W1227 09:36:55.783877  630355 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1227 09:36:55.783911  630355 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1227 09:36:55.783950  630355 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1227 09:36:55.845043  630355 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-246956 --name newest-cni-246956 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-246956 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-246956 --network newest-cni-246956 --ip 192.168.76.2 --volume newest-cni-246956:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a
	I1227 09:36:56.141349  630355 cli_runner.go:164] Run: docker container inspect newest-cni-246956 --format={{.State.Running}}
	I1227 09:36:56.161926  630355 cli_runner.go:164] Run: docker container inspect newest-cni-246956 --format={{.State.Status}}
	I1227 09:36:56.186737  630355 cli_runner.go:164] Run: docker exec newest-cni-246956 stat /var/lib/dpkg/alternatives/iptables
	I1227 09:36:56.251448  630355 oci.go:144] the created container "newest-cni-246956" has a running status.
	I1227 09:36:56.251484  630355 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22343-373581/.minikube/machines/newest-cni-246956/id_rsa...
	I1227 09:36:56.320494  631392 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 09:36:56.320535  631392 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22343-373581/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4
	I1227 09:36:56.320544  631392 cache.go:65] Caching tarball of preloaded images
	I1227 09:36:56.320642  631392 preload.go:251] Found /home/jenkins/minikube-integration/22343-373581/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1227 09:36:56.320635  631392 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1227 09:36:56.320657  631392 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I1227 09:36:56.320859  631392 profile.go:143] Saving config to /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/default-k8s-diff-port-497722/config.json ...
	I1227 09:36:56.345922  631392 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon, skipping pull
	I1227 09:36:56.345947  631392 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in daemon, skipping load
	I1227 09:36:56.345968  631392 cache.go:243] Successfully downloaded all kic artifacts
	I1227 09:36:56.346010  631392 start.go:360] acquireMachinesLock for default-k8s-diff-port-497722: {Name:mk952cc47ec82ed9310014186e6e4270fbb3e58d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 09:36:56.346079  631392 start.go:364] duration metric: took 44.824µs to acquireMachinesLock for "default-k8s-diff-port-497722"
	I1227 09:36:56.346102  631392 start.go:96] Skipping create...Using existing machine configuration
	I1227 09:36:56.346112  631392 fix.go:54] fixHost starting: 
	I1227 09:36:56.346414  631392 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-497722 --format={{.State.Status}}
	I1227 09:36:56.365133  631392 fix.go:112] recreateIfNeeded on default-k8s-diff-port-497722: state=Stopped err=<nil>
	W1227 09:36:56.365221  631392 fix.go:138] unexpected machine state, will restart: <nil>
	I1227 09:36:55.892570  629532 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1227 09:36:55.892596  629532 machine.go:97] duration metric: took 5.581836028s to provisionDockerMachine
	I1227 09:36:55.892610  629532 start.go:293] postStartSetup for "no-preload-963457" (driver="docker")
	I1227 09:36:55.892621  629532 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1227 09:36:55.892671  629532 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1227 09:36:55.892708  629532 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-963457
	I1227 09:36:55.914280  629532 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/no-preload-963457/id_rsa Username:docker}
	I1227 09:36:56.011927  629532 ssh_runner.go:195] Run: cat /etc/os-release
	I1227 09:36:56.015740  629532 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1227 09:36:56.015765  629532 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1227 09:36:56.015778  629532 filesync.go:126] Scanning /home/jenkins/minikube-integration/22343-373581/.minikube/addons for local assets ...
	I1227 09:36:56.015885  629532 filesync.go:126] Scanning /home/jenkins/minikube-integration/22343-373581/.minikube/files for local assets ...
	I1227 09:36:56.015989  629532 filesync.go:149] local asset: /home/jenkins/minikube-integration/22343-373581/.minikube/files/etc/ssl/certs/3771712.pem -> 3771712.pem in /etc/ssl/certs
	I1227 09:36:56.016101  629532 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1227 09:36:56.024943  629532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/files/etc/ssl/certs/3771712.pem --> /etc/ssl/certs/3771712.pem (1708 bytes)
	I1227 09:36:56.046067  629532 start.go:296] duration metric: took 153.444971ms for postStartSetup
	I1227 09:36:56.046157  629532 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 09:36:56.046226  629532 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-963457
	I1227 09:36:56.065042  629532 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/no-preload-963457/id_rsa Username:docker}
	I1227 09:36:56.156611  629532 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1227 09:36:56.161967  629532 fix.go:56] duration metric: took 6.41436493s for fixHost
	I1227 09:36:56.161992  629532 start.go:83] releasing machines lock for "no-preload-963457", held for 6.414414383s
	I1227 09:36:56.162052  629532 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-963457
	I1227 09:36:56.188154  629532 ssh_runner.go:195] Run: cat /version.json
	I1227 09:36:56.188215  629532 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-963457
	I1227 09:36:56.188464  629532 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1227 09:36:56.188765  629532 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-963457
	I1227 09:36:56.223568  629532 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/no-preload-963457/id_rsa Username:docker}
	I1227 09:36:56.225022  629532 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/no-preload-963457/id_rsa Username:docker}
	I1227 09:36:56.390845  629532 ssh_runner.go:195] Run: systemctl --version
	I1227 09:36:56.399342  629532 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1227 09:36:56.448678  629532 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1227 09:36:56.454437  629532 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1227 09:36:56.454505  629532 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1227 09:36:56.464966  629532 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1227 09:36:56.464988  629532 start.go:496] detecting cgroup driver to use...
	I1227 09:36:56.465019  629532 detect.go:190] detected "systemd" cgroup driver on host os
	I1227 09:36:56.465068  629532 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1227 09:36:56.498904  629532 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 09:36:56.522095  629532 docker.go:218] disabling cri-docker service (if available) ...
	I1227 09:36:56.522154  629532 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1227 09:36:56.554225  629532 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1227 09:36:56.572425  629532 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1227 09:36:56.679708  629532 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1227 09:36:56.789709  629532 docker.go:234] disabling docker service ...
	I1227 09:36:56.789778  629532 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1227 09:36:56.806829  629532 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1227 09:36:56.820513  629532 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1227 09:36:56.923496  629532 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1227 09:36:57.030200  629532 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1227 09:36:57.043639  629532 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 09:36:57.058019  629532 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1227 09:36:57.058082  629532 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:36:57.067538  629532 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1227 09:36:57.067598  629532 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:36:57.077318  629532 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:36:57.085917  629532 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:36:57.094193  629532 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1227 09:36:57.101639  629532 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:36:57.110030  629532 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:36:57.117710  629532 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:36:57.126967  629532 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1227 09:36:57.133883  629532 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1227 09:36:57.141132  629532 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 09:36:57.224153  629532 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1227 09:36:57.360012  629532 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1227 09:36:57.360088  629532 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1227 09:36:57.364319  629532 start.go:574] Will wait 60s for crictl version
	I1227 09:36:57.364375  629532 ssh_runner.go:195] Run: which crictl
	I1227 09:36:57.367811  629532 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1227 09:36:57.391321  629532 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1227 09:36:57.391394  629532 ssh_runner.go:195] Run: crio --version
	I1227 09:36:57.421171  629532 ssh_runner.go:195] Run: crio --version
	I1227 09:36:57.452635  629532 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.34.3 ...
	I1227 09:36:57.453610  629532 cli_runner.go:164] Run: docker network inspect no-preload-963457 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 09:36:57.471362  629532 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1227 09:36:57.475352  629532 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 09:36:57.485498  629532 kubeadm.go:884] updating cluster {Name:no-preload-963457 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:no-preload-963457 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1227 09:36:57.485606  629532 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 09:36:57.485644  629532 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 09:36:57.516604  629532 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 09:36:57.516626  629532 cache_images.go:86] Images are preloaded, skipping loading
	I1227 09:36:57.516634  629532 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0 crio true true} ...
	I1227 09:36:57.516744  629532 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-963457 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:no-preload-963457 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1227 09:36:57.516854  629532 ssh_runner.go:195] Run: crio config
	I1227 09:36:57.561627  629532 cni.go:84] Creating CNI manager for ""
	I1227 09:36:57.561649  629532 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 09:36:57.561667  629532 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1227 09:36:57.561699  629532 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-963457 NodeName:no-preload-963457 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1227 09:36:57.561892  629532 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-963457"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1227 09:36:57.561977  629532 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1227 09:36:57.570489  629532 binaries.go:51] Found k8s binaries, skipping transfer
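The kubeadm config dumped above (kubeadm.go:203) is a rendered template: the options struct logged at kubeadm.go:197 supplies the advertise address, pod subnet, CRI socket, and extra args that get interpolated into the YAML before it is shipped to /var/tmp/minikube/kubeadm.yaml.new. A toy illustration of that render step with text/template (the opts struct and template below are cut down for the example and are not minikube's actual ones):

    package main

    import (
        "os"
        "text/template"
    )

    // A cut-down stand-in for the kubeadm options logged at kubeadm.go:197.
    type opts struct {
        AdvertiseAddress string
        BindPort         int
        PodSubnet        string
        CRISocket        string
    }

    const tmpl = `apiVersion: kubeadm.k8s.io/v1beta4
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.AdvertiseAddress}}
      bindPort: {{.BindPort}}
    nodeRegistration:
      criSocket: {{.CRISocket}}
    ---
    networking:
      podSubnet: "{{.PodSubnet}}"
    `

    func main() {
        t := template.Must(template.New("kubeadm").Parse(tmpl))
        _ = t.Execute(os.Stdout, opts{
            AdvertiseAddress: "192.168.85.2",
            BindPort:         8443,
            PodSubnet:        "10.244.0.0/16",
            CRISocket:        "unix:///var/run/crio/crio.sock",
        })
    }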
	I1227 09:36:57.570544  629532 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1227 09:36:57.579475  629532 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1227 09:36:57.592242  629532 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1227 09:36:57.604718  629532 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
	I1227 09:36:57.617292  629532 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1227 09:36:57.621167  629532 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 09:36:57.631391  629532 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 09:36:57.717314  629532 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 09:36:57.743061  629532 certs.go:69] Setting up /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/no-preload-963457 for IP: 192.168.85.2
	I1227 09:36:57.743088  629532 certs.go:195] generating shared ca certs ...
	I1227 09:36:57.743111  629532 certs.go:227] acquiring lock for ca certs: {Name:mkab52f7e7387ca5a3a8b54d27e85c3698ea95d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:36:57.743279  629532 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22343-373581/.minikube/ca.key
	I1227 09:36:57.743330  629532 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22343-373581/.minikube/proxy-client-ca.key
	I1227 09:36:57.743343  629532 certs.go:257] generating profile certs ...
	I1227 09:36:57.743479  629532 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/no-preload-963457/client.key
	I1227 09:36:57.743563  629532 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/no-preload-963457/apiserver.key.7eac886d
	I1227 09:36:57.743621  629532 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/no-preload-963457/proxy-client.key
	I1227 09:36:57.743760  629532 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/377171.pem (1338 bytes)
	W1227 09:36:57.743831  629532 certs.go:480] ignoring /home/jenkins/minikube-integration/22343-373581/.minikube/certs/377171_empty.pem, impossibly tiny 0 bytes
	I1227 09:36:57.743845  629532 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca-key.pem (1679 bytes)
	I1227 09:36:57.743879  629532 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca.pem (1082 bytes)
	I1227 09:36:57.743916  629532 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/cert.pem (1123 bytes)
	I1227 09:36:57.743950  629532 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/key.pem (1679 bytes)
	I1227 09:36:57.744006  629532 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-373581/.minikube/files/etc/ssl/certs/3771712.pem (1708 bytes)
	I1227 09:36:57.744846  629532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1227 09:36:57.763692  629532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1227 09:36:57.782669  629532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1227 09:36:57.803981  629532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1227 09:36:57.828529  629532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/no-preload-963457/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1227 09:36:57.848835  629532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/no-preload-963457/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1227 09:36:57.866897  629532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/no-preload-963457/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1227 09:36:57.883743  629532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/no-preload-963457/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1227 09:36:57.900146  629532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1227 09:36:57.916751  629532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/certs/377171.pem --> /usr/share/ca-certificates/377171.pem (1338 bytes)
	I1227 09:36:57.934086  629532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/files/etc/ssl/certs/3771712.pem --> /usr/share/ca-certificates/3771712.pem (1708 bytes)
	I1227 09:36:57.952366  629532 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1227 09:36:57.966505  629532 ssh_runner.go:195] Run: openssl version
	I1227 09:36:57.975156  629532 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:36:57.984628  629532 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1227 09:36:57.993907  629532 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:36:57.998878  629532 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 09:06 /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:36:57.998931  629532 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:36:58.039453  629532 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1227 09:36:58.046838  629532 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/377171.pem
	I1227 09:36:58.053745  629532 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/377171.pem /etc/ssl/certs/377171.pem
	I1227 09:36:58.060929  629532 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/377171.pem
	I1227 09:36:58.064401  629532 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 27 09:09 /usr/share/ca-certificates/377171.pem
	I1227 09:36:58.064454  629532 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/377171.pem
	I1227 09:36:58.100242  629532 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1227 09:36:58.107476  629532 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3771712.pem
	I1227 09:36:58.114303  629532 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3771712.pem /etc/ssl/certs/3771712.pem
	I1227 09:36:58.122260  629532 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3771712.pem
	I1227 09:36:58.125672  629532 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 27 09:09 /usr/share/ca-certificates/3771712.pem
	I1227 09:36:58.125718  629532 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3771712.pem
	I1227 09:36:58.160416  629532 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
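Each `openssl x509 -hash -noout` / `ln -fs` pair above installs a CA into the OpenSSL trust directory: the subject-name hash (b5213941, 51391683, 3ec20f2e in this run) becomes the `<hash>.0` symlink that TLS libraries use for lookup. A sketch of one such installation (installCA is an illustrative name; it shells out to the same openssl invocation the log shows):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // installCA symlinks /etc/ssl/certs/<subject-hash>.0 to certPath,
    // matching the openssl -hash + ln -fs sequence in the log.
    func installCA(certPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out))
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        _ = os.Remove(link) // ln -fs semantics: replace an existing link
        return os.Symlink(certPath, link)
    }

    func main() {
        if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }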
	I1227 09:36:58.167633  629532 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1227 09:36:58.171634  629532 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1227 09:36:58.211068  629532 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1227 09:36:58.251576  629532 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1227 09:36:58.300366  629532 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1227 09:36:58.353707  629532 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1227 09:36:58.409756  629532 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
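`openssl x509 -checkend 86400` exits non-zero when a certificate expires within the next 24 hours, which is how the restart path decides whether control-plane certs need regenerating. The equivalent check written against Go's crypto/x509 (a sketch; the test binary itself calls the openssl CLI as logged):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the PEM cert at path expires within d,
    // mirroring `openssl x509 -checkend <seconds>`.
    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM block in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        fmt.Println("expires within 24h:", soon)
    }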
	I1227 09:36:58.450831  629532 kubeadm.go:401] StartCluster: {Name:no-preload-963457 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:no-preload-963457 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 09:36:58.450953  629532 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 09:36:58.451037  629532 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 09:36:58.486940  629532 cri.go:96] found id: "3e2d56e4ec07d845c73c075df87007dd294313f8f25d93e2e062eae21343461c"
	I1227 09:36:58.487007  629532 cri.go:96] found id: "0edde0ef003566fce4556ceb0a3d7cbc04d2cd6685f5afd803595f3ababb1338"
	I1227 09:36:58.487016  629532 cri.go:96] found id: "03b54f84cfa7e0b506b3122cd323cd0db22c3c4310cfedd9769eeb770ec9a426"
	I1227 09:36:58.487021  629532 cri.go:96] found id: "716a7952d1fa9945a526436df75297cbf883fb889ba62f53b3ae1e94790bfeaa"
	I1227 09:36:58.487067  629532 cri.go:96] found id: ""
	I1227 09:36:58.487122  629532 ssh_runner.go:195] Run: sudo runc list -f json
	W1227 09:36:58.499274  629532 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T09:36:58Z" level=error msg="open /run/runc: no such file or directory"
	I1227 09:36:58.499327  629532 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1227 09:36:58.507652  629532 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1227 09:36:58.507673  629532 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1227 09:36:58.507717  629532 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1227 09:36:58.515112  629532 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1227 09:36:58.515843  629532 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-963457" does not appear in /home/jenkins/minikube-integration/22343-373581/kubeconfig
	I1227 09:36:58.516271  629532 kubeconfig.go:62] /home/jenkins/minikube-integration/22343-373581/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-963457" cluster setting kubeconfig missing "no-preload-963457" context setting]
	I1227 09:36:58.516950  629532 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-373581/kubeconfig: {Name:mkb411689a0952d3f8325ae3ededdb7352d2980c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:36:58.518808  629532 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1227 09:36:58.526405  629532 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1227 09:36:58.526432  629532 kubeadm.go:602] duration metric: took 18.753157ms to restartPrimaryControlPlane
	I1227 09:36:58.526441  629532 kubeadm.go:403] duration metric: took 75.626448ms to StartCluster
	I1227 09:36:58.526457  629532 settings.go:142] acquiring lock: {Name:mkc867fd794159ff1847b43b60548e454c403aa9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:36:58.526521  629532 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22343-373581/kubeconfig
	I1227 09:36:58.527618  629532 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-373581/kubeconfig: {Name:mkb411689a0952d3f8325ae3ededdb7352d2980c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:36:58.527872  629532 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1227 09:36:58.527997  629532 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1227 09:36:58.528107  629532 addons.go:70] Setting storage-provisioner=true in profile "no-preload-963457"
	I1227 09:36:58.528134  629532 addons.go:239] Setting addon storage-provisioner=true in "no-preload-963457"
	I1227 09:36:58.528133  629532 config.go:182] Loaded profile config "no-preload-963457": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	W1227 09:36:58.528143  629532 addons.go:248] addon storage-provisioner should already be in state true
	I1227 09:36:58.528150  629532 addons.go:70] Setting dashboard=true in profile "no-preload-963457"
	I1227 09:36:58.528157  629532 addons.go:70] Setting default-storageclass=true in profile "no-preload-963457"
	I1227 09:36:58.528178  629532 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-963457"
	I1227 09:36:58.528184  629532 addons.go:239] Setting addon dashboard=true in "no-preload-963457"
	W1227 09:36:58.528193  629532 addons.go:248] addon dashboard should already be in state true
	I1227 09:36:58.528196  629532 host.go:66] Checking if "no-preload-963457" exists ...
	I1227 09:36:58.528219  629532 host.go:66] Checking if "no-preload-963457" exists ...
	I1227 09:36:58.528519  629532 cli_runner.go:164] Run: docker container inspect no-preload-963457 --format={{.State.Status}}
	I1227 09:36:58.528685  629532 cli_runner.go:164] Run: docker container inspect no-preload-963457 --format={{.State.Status}}
	I1227 09:36:58.528697  629532 cli_runner.go:164] Run: docker container inspect no-preload-963457 --format={{.State.Status}}
	I1227 09:36:58.529707  629532 out.go:179] * Verifying Kubernetes components...
	I1227 09:36:58.530836  629532 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 09:36:58.557409  629532 addons.go:239] Setting addon default-storageclass=true in "no-preload-963457"
	W1227 09:36:58.557440  629532 addons.go:248] addon default-storageclass should already be in state true
	I1227 09:36:58.557472  629532 host.go:66] Checking if "no-preload-963457" exists ...
	I1227 09:36:58.558777  629532 cli_runner.go:164] Run: docker container inspect no-preload-963457 --format={{.State.Status}}
	I1227 09:36:58.558787  629532 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1227 09:36:58.559492  629532 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1227 09:36:58.562458  629532 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1227 09:36:56.489729  630355 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22343-373581/.minikube/machines/newest-cni-246956/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1227 09:36:56.526840  630355 cli_runner.go:164] Run: docker container inspect newest-cni-246956 --format={{.State.Status}}
	I1227 09:36:56.552039  630355 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1227 09:36:56.552068  630355 kic_runner.go:114] Args: [docker exec --privileged newest-cni-246956 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1227 09:36:56.617818  630355 cli_runner.go:164] Run: docker container inspect newest-cni-246956 --format={{.State.Status}}
	I1227 09:36:56.638019  630355 machine.go:94] provisionDockerMachine start ...
	I1227 09:36:56.638109  630355 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-246956
	I1227 09:36:56.659481  630355 main.go:144] libmachine: Using SSH client type: native
	I1227 09:36:56.659711  630355 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33463 <nil> <nil>}
	I1227 09:36:56.659723  630355 main.go:144] libmachine: About to run SSH command:
	hostname
	I1227 09:36:56.792984  630355 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-246956
	
	I1227 09:36:56.793019  630355 ubuntu.go:182] provisioning hostname "newest-cni-246956"
	I1227 09:36:56.793088  630355 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-246956
	I1227 09:36:56.815143  630355 main.go:144] libmachine: Using SSH client type: native
	I1227 09:36:56.815483  630355 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33463 <nil> <nil>}
	I1227 09:36:56.815506  630355 main.go:144] libmachine: About to run SSH command:
	sudo hostname newest-cni-246956 && echo "newest-cni-246956" | sudo tee /etc/hostname
	I1227 09:36:56.968737  630355 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-246956
	
	I1227 09:36:56.968893  630355 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-246956
	I1227 09:36:56.992239  630355 main.go:144] libmachine: Using SSH client type: native
	I1227 09:36:56.992470  630355 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33463 <nil> <nil>}
	I1227 09:36:56.992489  630355 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-246956' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-246956/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-246956' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1227 09:36:57.122046  630355 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1227 09:36:57.122079  630355 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22343-373581/.minikube CaCertPath:/home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22343-373581/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22343-373581/.minikube}
	I1227 09:36:57.122127  630355 ubuntu.go:190] setting up certificates
	I1227 09:36:57.122138  630355 provision.go:84] configureAuth start
	I1227 09:36:57.122216  630355 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-246956
	I1227 09:36:57.142307  630355 provision.go:143] copyHostCerts
	I1227 09:36:57.142360  630355 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-373581/.minikube/ca.pem, removing ...
	I1227 09:36:57.142370  630355 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-373581/.minikube/ca.pem
	I1227 09:36:57.142423  630355 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22343-373581/.minikube/ca.pem (1082 bytes)
	I1227 09:36:57.142512  630355 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-373581/.minikube/cert.pem, removing ...
	I1227 09:36:57.142521  630355 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-373581/.minikube/cert.pem
	I1227 09:36:57.142546  630355 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22343-373581/.minikube/cert.pem (1123 bytes)
	I1227 09:36:57.142616  630355 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-373581/.minikube/key.pem, removing ...
	I1227 09:36:57.142623  630355 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-373581/.minikube/key.pem
	I1227 09:36:57.142648  630355 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22343-373581/.minikube/key.pem (1679 bytes)
	I1227 09:36:57.142706  630355 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22343-373581/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca-key.pem org=jenkins.newest-cni-246956 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-246956]
	I1227 09:36:57.212931  630355 provision.go:177] copyRemoteCerts
	I1227 09:36:57.212987  630355 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1227 09:36:57.213033  630355 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-246956
	I1227 09:36:57.230924  630355 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/newest-cni-246956/id_rsa Username:docker}
	I1227 09:36:57.325527  630355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1227 09:36:57.343993  630355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1227 09:36:57.361059  630355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1227 09:36:57.378461  630355 provision.go:87] duration metric: took 256.298706ms to configureAuth
	I1227 09:36:57.378484  630355 ubuntu.go:206] setting minikube options for container-runtime
	I1227 09:36:57.378677  630355 config.go:182] Loaded profile config "newest-cni-246956": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:36:57.378826  630355 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-246956
	I1227 09:36:57.397931  630355 main.go:144] libmachine: Using SSH client type: native
	I1227 09:36:57.398243  630355 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33463 <nil> <nil>}
	I1227 09:36:57.398266  630355 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1227 09:36:57.667097  630355 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1227 09:36:57.667129  630355 machine.go:97] duration metric: took 1.02908483s to provisionDockerMachine
	I1227 09:36:57.667142  630355 client.go:176] duration metric: took 6.162825502s to LocalClient.Create
	I1227 09:36:57.667182  630355 start.go:167] duration metric: took 6.162896704s to libmachine.API.Create "newest-cni-246956"
	I1227 09:36:57.667192  630355 start.go:293] postStartSetup for "newest-cni-246956" (driver="docker")
	I1227 09:36:57.667204  630355 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1227 09:36:57.667353  630355 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1227 09:36:57.667440  630355 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-246956
	I1227 09:36:57.688032  630355 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/newest-cni-246956/id_rsa Username:docker}
	I1227 09:36:57.781111  630355 ssh_runner.go:195] Run: cat /etc/os-release
	I1227 09:36:57.785094  630355 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1227 09:36:57.785137  630355 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1227 09:36:57.785152  630355 filesync.go:126] Scanning /home/jenkins/minikube-integration/22343-373581/.minikube/addons for local assets ...
	I1227 09:36:57.785207  630355 filesync.go:126] Scanning /home/jenkins/minikube-integration/22343-373581/.minikube/files for local assets ...
	I1227 09:36:57.785305  630355 filesync.go:149] local asset: /home/jenkins/minikube-integration/22343-373581/.minikube/files/etc/ssl/certs/3771712.pem -> 3771712.pem in /etc/ssl/certs
	I1227 09:36:57.785438  630355 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1227 09:36:57.793222  630355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/files/etc/ssl/certs/3771712.pem --> /etc/ssl/certs/3771712.pem (1708 bytes)
	I1227 09:36:57.817083  630355 start.go:296] duration metric: took 149.877387ms for postStartSetup
	I1227 09:36:57.817500  630355 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-246956
	I1227 09:36:57.842720  630355 profile.go:143] Saving config to /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/newest-cni-246956/config.json ...
	I1227 09:36:57.842997  630355 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 09:36:57.843039  630355 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-246956
	I1227 09:36:57.861694  630355 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/newest-cni-246956/id_rsa Username:docker}
	I1227 09:36:57.950266  630355 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1227 09:36:57.955115  630355 start.go:128] duration metric: took 6.45260447s to createHost
	I1227 09:36:57.955139  630355 start.go:83] releasing machines lock for "newest-cni-246956", held for 6.452757416s
	I1227 09:36:57.955207  630355 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-246956
	I1227 09:36:57.976745  630355 ssh_runner.go:195] Run: cat /version.json
	I1227 09:36:57.976812  630355 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1227 09:36:57.976893  630355 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-246956
	I1227 09:36:57.976938  630355 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-246956
	I1227 09:36:57.996141  630355 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/newest-cni-246956/id_rsa Username:docker}
	I1227 09:36:57.997139  630355 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/newest-cni-246956/id_rsa Username:docker}
	I1227 09:36:58.139611  630355 ssh_runner.go:195] Run: systemctl --version
	I1227 09:36:58.145675  630355 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1227 09:36:58.181051  630355 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1227 09:36:58.185484  630355 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1227 09:36:58.185559  630355 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1227 09:36:58.210594  630355 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
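The `find ... -exec mv {} {}.mk_disabled` step above renames any bridge or podman CNI config so only the CNI minikube manages stays active, and the next line reports what was disabled. A Go sketch of that rename (directory and name patterns taken from the find expression; disableBridgeCNI is an illustrative name):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    // disableBridgeCNI renames bridge/podman configs in dir to *.mk_disabled,
    // like the find ... -exec mv step in the log.
    func disableBridgeCNI(dir string) ([]string, error) {
        entries, err := os.ReadDir(dir)
        if err != nil {
            return nil, err
        }
        var moved []string
        for _, e := range entries {
            name := e.Name()
            if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
                continue
            }
            if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
                src := filepath.Join(dir, name)
                if err := os.Rename(src, src+".mk_disabled"); err != nil {
                    return moved, err
                }
                moved = append(moved, src)
            }
        }
        return moved, nil
    }

    func main() {
        moved, err := disableBridgeCNI("/etc/cni/net.d")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
        fmt.Println("disabled:", moved)
    }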
	I1227 09:36:58.210618  630355 start.go:496] detecting cgroup driver to use...
	I1227 09:36:58.210653  630355 detect.go:190] detected "systemd" cgroup driver on host os
	I1227 09:36:58.210713  630355 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1227 09:36:58.227384  630355 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 09:36:58.238872  630355 docker.go:218] disabling cri-docker service (if available) ...
	I1227 09:36:58.238929  630355 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1227 09:36:58.260938  630355 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1227 09:36:58.283057  630355 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1227 09:36:58.414499  630355 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1227 09:36:58.538586  630355 docker.go:234] disabling docker service ...
	I1227 09:36:58.538673  630355 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1227 09:36:58.586101  630355 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1227 09:36:58.605375  630355 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1227 09:36:58.705180  630355 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1227 09:36:58.819410  630355 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1227 09:36:58.832661  630355 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 09:36:58.850388  630355 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1227 09:36:58.850452  630355 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:36:58.862728  630355 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1227 09:36:58.862856  630355 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:36:58.873915  630355 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:36:58.883825  630355 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:36:58.894396  630355 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1227 09:36:58.903928  630355 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:36:58.915832  630355 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:36:58.931065  630355 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:36:58.941511  630355 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1227 09:36:58.950306  630355 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1227 09:36:58.957971  630355 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 09:36:59.055197  630355 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1227 09:36:59.200763  630355 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1227 09:36:59.200870  630355 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1227 09:36:59.205226  630355 start.go:574] Will wait 60s for crictl version
	I1227 09:36:59.205294  630355 ssh_runner.go:195] Run: which crictl
	I1227 09:36:59.209253  630355 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1227 09:36:59.235124  630355 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1227 09:36:59.235211  630355 ssh_runner.go:195] Run: crio --version
	I1227 09:36:59.266439  630355 ssh_runner.go:195] Run: crio --version
	I1227 09:36:59.304407  630355 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.34.3 ...
	I1227 09:36:59.305344  630355 cli_runner.go:164] Run: docker network inspect newest-cni-246956 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 09:36:59.325345  630355 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1227 09:36:59.329616  630355 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 09:36:59.343894  630355 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1227 09:36:58.562497  629532 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1227 09:36:58.562512  629532 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1227 09:36:58.562565  629532 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-963457
	I1227 09:36:58.563435  629532 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1227 09:36:58.563457  629532 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1227 09:36:58.563515  629532 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-963457
	I1227 09:36:58.586081  629532 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1227 09:36:58.586106  629532 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1227 09:36:58.586165  629532 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-963457
	I1227 09:36:58.597636  629532 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/no-preload-963457/id_rsa Username:docker}
	I1227 09:36:58.600166  629532 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/no-preload-963457/id_rsa Username:docker}
	I1227 09:36:58.615813  629532 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/no-preload-963457/id_rsa Username:docker}
	I1227 09:36:58.683845  629532 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 09:36:58.696212  629532 node_ready.go:35] waiting up to 6m0s for node "no-preload-963457" to be "Ready" ...
	I1227 09:36:58.716769  629532 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1227 09:36:58.717072  629532 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1227 09:36:58.717091  629532 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1227 09:36:58.722474  629532 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1227 09:36:58.732902  629532 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1227 09:36:58.732921  629532 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1227 09:36:58.756653  629532 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1227 09:36:58.756700  629532 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1227 09:36:58.775246  629532 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1227 09:36:58.775273  629532 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1227 09:36:58.791178  629532 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1227 09:36:58.791220  629532 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1227 09:36:58.806784  629532 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1227 09:36:58.806865  629532 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1227 09:36:58.821301  629532 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1227 09:36:58.821323  629532 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1227 09:36:58.835038  629532 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1227 09:36:58.835059  629532 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1227 09:36:58.851360  629532 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1227 09:36:58.851383  629532 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1227 09:36:58.866009  629532 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1227 09:36:56.752824  622335 pod_ready.go:104] pod "coredns-7d764666f9-vm5hp" is not "Ready", error: <nil>
	I1227 09:36:58.252524  622335 pod_ready.go:94] pod "coredns-7d764666f9-vm5hp" is "Ready"
	I1227 09:36:58.252556  622335 pod_ready.go:86] duration metric: took 32.507379919s for pod "coredns-7d764666f9-vm5hp" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:36:58.255267  622335 pod_ready.go:83] waiting for pod "etcd-embed-certs-912564" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:36:58.260388  622335 pod_ready.go:94] pod "etcd-embed-certs-912564" is "Ready"
	I1227 09:36:58.260428  622335 pod_ready.go:86] duration metric: took 5.133413ms for pod "etcd-embed-certs-912564" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:36:58.263042  622335 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-912564" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:36:58.269170  622335 pod_ready.go:94] pod "kube-apiserver-embed-certs-912564" is "Ready"
	I1227 09:36:58.269195  622335 pod_ready.go:86] duration metric: took 6.12908ms for pod "kube-apiserver-embed-certs-912564" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:36:58.271334  622335 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-912564" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:36:58.451056  622335 pod_ready.go:94] pod "kube-controller-manager-embed-certs-912564" is "Ready"
	I1227 09:36:58.451082  622335 pod_ready.go:86] duration metric: took 179.728256ms for pod "kube-controller-manager-embed-certs-912564" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:36:58.650121  622335 pod_ready.go:83] waiting for pod "kube-proxy-dv8ch" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:36:59.050144  622335 pod_ready.go:94] pod "kube-proxy-dv8ch" is "Ready"
	I1227 09:36:59.050170  622335 pod_ready.go:86] duration metric: took 400.019705ms for pod "kube-proxy-dv8ch" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:36:59.249302  622335 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-912564" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:36:59.649616  622335 pod_ready.go:94] pod "kube-scheduler-embed-certs-912564" is "Ready"
	I1227 09:36:59.649652  622335 pod_ready.go:86] duration metric: took 400.318884ms for pod "kube-scheduler-embed-certs-912564" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:36:59.649667  622335 pod_ready.go:40] duration metric: took 33.907675392s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1227 09:36:59.704219  622335 start.go:625] kubectl: 1.35.0, cluster: 1.35.0 (minor skew: 0)
	I1227 09:36:59.705864  622335 out.go:179] * Done! kubectl is now configured to use "embed-certs-912564" cluster and "default" namespace by default
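The pod_ready waits above poll each control-plane label (k8s-app=kube-dns, component=etcd, and so on) until every matching pod reports the Ready condition. One such check sketched with client-go (assumes k8s.io/client-go is on the module path and a kubeconfig at the default location; podsReady is an illustrative name, not minikube's internal helper):

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // podsReady reports whether every pod matching selector in ns has
    // condition Ready=True, like the pod_ready.go waits in the log.
    func podsReady(cs *kubernetes.Clientset, ns, selector string) (bool, error) {
        pods, err := cs.CoreV1().Pods(ns).List(context.TODO(),
            metav1.ListOptions{LabelSelector: selector})
        if err != nil {
            return false, err
        }
        for _, p := range pods.Items {
            ready := false
            for _, c := range p.Status.Conditions {
                if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                    ready = true
                }
            }
            if !ready {
                return false, nil
            }
        }
        return len(pods.Items) > 0, nil
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        ok, err := podsReady(cs, "kube-system", "component=etcd")
        fmt.Println(ok, err)
    }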
	I1227 09:36:59.788339  629532 node_ready.go:49] node "no-preload-963457" is "Ready"
	I1227 09:36:59.788374  629532 node_ready.go:38] duration metric: took 1.092117451s for node "no-preload-963457" to be "Ready" ...
	I1227 09:36:59.788394  629532 api_server.go:52] waiting for apiserver process to appear ...
	I1227 09:36:59.788452  629532 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 09:37:00.485897  629532 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.769088985s)
	I1227 09:37:00.485927  629532 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.763421495s)
	I1227 09:37:00.486068  629532 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.620019639s)
	I1227 09:37:00.486186  629532 api_server.go:72] duration metric: took 1.958263237s to wait for apiserver process to appear ...
	I1227 09:37:00.486206  629532 api_server.go:88] waiting for apiserver healthz status ...
	I1227 09:37:00.486232  629532 api_server.go:299] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1227 09:37:00.488270  629532 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-963457 addons enable metrics-server
	
	I1227 09:37:00.491676  629532 api_server.go:325] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1227 09:37:00.491700  629532 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
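Note: the two dumps above are the same 500 body, logged once at info level and again as a warning while the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks finish; the endpoint typically flips to 200 once they complete. The same per-check breakdown can be pulled by hand (a sketch; IP and port are taken from the log above, -k skips the cluster's self-signed CA, and ?verbose lists each check even on success):

	curl -ks "https://192.168.85.2:8443/healthz?verbose"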
	I1227 09:37:00.493717  629532 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1227 09:36:59.344975  630355 kubeadm.go:884] updating cluster {Name:newest-cni-246956 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-246956 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1227 09:36:59.345101  630355 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 09:36:59.345149  630355 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 09:36:59.384759  630355 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 09:36:59.384782  630355 crio.go:433] Images already preloaded, skipping extraction
	I1227 09:36:59.384849  630355 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 09:36:59.410055  630355 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 09:36:59.410078  630355 cache_images.go:86] Images are preloaded, skipping loading
	I1227 09:36:59.410088  630355 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0 crio true true} ...
	I1227 09:36:59.410204  630355 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-246956 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:newest-cni-246956 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
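The bare "ExecStart=" line in the kubelet unit above is the standard systemd drop-in reset: it empties the ExecStart list inherited from the base kubelet.service so that only the fully-specified command on the following line runs. As the scp lines below show, the fragment is installed as a drop-in (a sketch of the layout, with paths taken from this run):

	/etc/systemd/system/kubelet.service.d/10-kubeadm.conf   # the [Service] override rendered above
	/lib/systemd/system/kubelet.service                     # the base unit it amends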
	I1227 09:36:59.410294  630355 ssh_runner.go:195] Run: crio config
	I1227 09:36:59.456322  630355 cni.go:84] Creating CNI manager for ""
	I1227 09:36:59.456350  630355 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 09:36:59.456368  630355 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1227 09:36:59.456397  630355 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-246956 NodeName:newest-cni-246956 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1227 09:36:59.456523  630355 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-246956"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1227 09:36:59.456584  630355 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1227 09:36:59.466669  630355 binaries.go:51] Found k8s binaries, skipping transfer
	I1227 09:36:59.466742  630355 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1227 09:36:59.475652  630355 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1227 09:36:59.488517  630355 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1227 09:36:59.502920  630355 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2211 bytes)
	I1227 09:36:59.515008  630355 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1227 09:36:59.518524  630355 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
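The one-liner above is how minikube rewrites /etc/hosts without hitting sudo-redirection pitfalls: filter out any stale control-plane.minikube.internal entry, append the fresh mapping, build the result under /tmp, then install it with sudo cp. An annotated sketch of the same pattern (the literal tab in the grep pattern and echo matters; $$ is the shell's PID):

	{ grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts;   # drop the old entry, if present
	  echo "192.168.76.2	control-plane.minikube.internal";      # append the current mapping
	} > /tmp/h.$$                                                 # stage the new file as an unprivileged user
	sudo cp /tmp/h.$$ /etc/hosts                                  # only the final copy needs root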
	I1227 09:36:59.528038  630355 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 09:36:59.624983  630355 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 09:36:59.660589  630355 certs.go:69] Setting up /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/newest-cni-246956 for IP: 192.168.76.2
	I1227 09:36:59.660613  630355 certs.go:195] generating shared ca certs ...
	I1227 09:36:59.660633  630355 certs.go:227] acquiring lock for ca certs: {Name:mkab52f7e7387ca5a3a8b54d27e85c3698ea95d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:36:59.660905  630355 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22343-373581/.minikube/ca.key
	I1227 09:36:59.661015  630355 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22343-373581/.minikube/proxy-client-ca.key
	I1227 09:36:59.661035  630355 certs.go:257] generating profile certs ...
	I1227 09:36:59.661115  630355 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/newest-cni-246956/client.key
	I1227 09:36:59.661143  630355 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/newest-cni-246956/client.crt with IP's: []
	I1227 09:36:59.788963  630355 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/newest-cni-246956/client.crt ...
	I1227 09:36:59.789056  630355 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/newest-cni-246956/client.crt: {Name:mke160e795be5819fc64a4cfdc99d30cbaf7ac78 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:36:59.789341  630355 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/newest-cni-246956/client.key ...
	I1227 09:36:59.789401  630355 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/newest-cni-246956/client.key: {Name:mked296971a2b1adfd827807ea9bcfac542a6198 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:36:59.789603  630355 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/newest-cni-246956/apiserver.key.c99eabfc
	I1227 09:36:59.789628  630355 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/newest-cni-246956/apiserver.crt.c99eabfc with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1227 09:37:00.007987  630355 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/newest-cni-246956/apiserver.crt.c99eabfc ...
	I1227 09:37:00.008015  630355 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/newest-cni-246956/apiserver.crt.c99eabfc: {Name:mkf106bea43ddce33073679b38a2435ae123204d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:37:00.008208  630355 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/newest-cni-246956/apiserver.key.c99eabfc ...
	I1227 09:37:00.008231  630355 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/newest-cni-246956/apiserver.key.c99eabfc: {Name:mk7a7a619839a917a7bc295106055593f103712f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:37:00.008348  630355 certs.go:382] copying /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/newest-cni-246956/apiserver.crt.c99eabfc -> /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/newest-cni-246956/apiserver.crt
	I1227 09:37:00.008443  630355 certs.go:386] copying /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/newest-cni-246956/apiserver.key.c99eabfc -> /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/newest-cni-246956/apiserver.key
	I1227 09:37:00.008507  630355 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/newest-cni-246956/proxy-client.key
	I1227 09:37:00.008521  630355 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/newest-cni-246956/proxy-client.crt with IP's: []
	I1227 09:37:00.065399  630355 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/newest-cni-246956/proxy-client.crt ...
	I1227 09:37:00.065432  630355 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/newest-cni-246956/proxy-client.crt: {Name:mk53170e13da64d8c60c92c2979a2d1722947a2e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:37:00.065641  630355 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/newest-cni-246956/proxy-client.key ...
	I1227 09:37:00.065669  630355 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/newest-cni-246956/proxy-client.key: {Name:mk035fe76dc91ae603b8c29c1b707b2402dd30b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:37:00.065955  630355 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/377171.pem (1338 bytes)
	W1227 09:37:00.066015  630355 certs.go:480] ignoring /home/jenkins/minikube-integration/22343-373581/.minikube/certs/377171_empty.pem, impossibly tiny 0 bytes
	I1227 09:37:00.066033  630355 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca-key.pem (1679 bytes)
	I1227 09:37:00.066073  630355 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca.pem (1082 bytes)
	I1227 09:37:00.066115  630355 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/cert.pem (1123 bytes)
	I1227 09:37:00.066153  630355 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/key.pem (1679 bytes)
	I1227 09:37:00.066212  630355 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-373581/.minikube/files/etc/ssl/certs/3771712.pem (1708 bytes)
	I1227 09:37:00.066935  630355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1227 09:37:00.090536  630355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1227 09:37:00.114426  630355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1227 09:37:00.139622  630355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1227 09:37:00.168025  630355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/newest-cni-246956/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1227 09:37:00.199153  630355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/newest-cni-246956/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1227 09:37:00.230695  630355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/newest-cni-246956/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1227 09:37:00.258683  630355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/newest-cni-246956/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1227 09:37:00.286125  630355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/files/etc/ssl/certs/3771712.pem --> /usr/share/ca-certificates/3771712.pem (1708 bytes)
	I1227 09:37:00.307110  630355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1227 09:37:00.329272  630355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/certs/377171.pem --> /usr/share/ca-certificates/377171.pem (1338 bytes)
	I1227 09:37:00.353282  630355 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1227 09:37:00.370493  630355 ssh_runner.go:195] Run: openssl version
	I1227 09:37:00.378974  630355 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3771712.pem
	I1227 09:37:00.386715  630355 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3771712.pem /etc/ssl/certs/3771712.pem
	I1227 09:37:00.395766  630355 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3771712.pem
	I1227 09:37:00.400298  630355 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 27 09:09 /usr/share/ca-certificates/3771712.pem
	I1227 09:37:00.400359  630355 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3771712.pem
	I1227 09:37:00.438423  630355 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1227 09:37:00.446991  630355 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/3771712.pem /etc/ssl/certs/3ec20f2e.0
	I1227 09:37:00.455583  630355 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:37:00.464747  630355 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1227 09:37:00.475422  630355 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:37:00.480200  630355 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 09:06 /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:37:00.480263  630355 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:37:00.528725  630355 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1227 09:37:00.536448  630355 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1227 09:37:00.543666  630355 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/377171.pem
	I1227 09:37:00.550688  630355 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/377171.pem /etc/ssl/certs/377171.pem
	I1227 09:37:00.557833  630355 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/377171.pem
	I1227 09:37:00.561504  630355 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 27 09:09 /usr/share/ca-certificates/377171.pem
	I1227 09:37:00.561559  630355 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/377171.pem
	I1227 09:37:00.597670  630355 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1227 09:37:00.605630  630355 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/377171.pem /etc/ssl/certs/51391683.0
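The openssl/ln pairs above implement OpenSSL's hashed-directory lookup: clients find a CA by computing the certificate's subject hash and opening /etc/ssl/certs/<hash>.0. Reproducing one pair by hand (the hash value comes straight from the log above):

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0       # the hash-named symlink OpenSSL resolves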
	I1227 09:37:00.613503  630355 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1227 09:37:00.617430  630355 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1227 09:37:00.617489  630355 kubeadm.go:401] StartCluster: {Name:newest-cni-246956 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-246956 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 09:37:00.617574  630355 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 09:37:00.617633  630355 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 09:37:00.648921  630355 cri.go:96] found id: ""
	I1227 09:37:00.648987  630355 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1227 09:37:00.657821  630355 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1227 09:37:00.665951  630355 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1227 09:37:00.666012  630355 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1227 09:37:00.674118  630355 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1227 09:37:00.674134  630355 kubeadm.go:158] found existing configuration files:
	
	I1227 09:37:00.674176  630355 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1227 09:37:00.681708  630355 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1227 09:37:00.681768  630355 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1227 09:37:00.689291  630355 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1227 09:37:00.697145  630355 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1227 09:37:00.697203  630355 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1227 09:37:00.705396  630355 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1227 09:37:00.714137  630355 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1227 09:37:00.714200  630355 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1227 09:37:00.723361  630355 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1227 09:37:00.732948  630355 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1227 09:37:00.733004  630355 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1227 09:37:00.741856  630355 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1227 09:37:00.788203  630355 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I1227 09:37:00.788305  630355 kubeadm.go:319] [preflight] Running pre-flight checks
	I1227 09:37:00.868108  630355 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1227 09:37:00.868200  630355 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1227 09:37:00.868288  630355 kubeadm.go:319] OS: Linux
	I1227 09:37:00.868365  630355 kubeadm.go:319] CGROUPS_CPU: enabled
	I1227 09:37:00.868440  630355 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1227 09:37:00.868658  630355 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1227 09:37:00.868734  630355 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1227 09:37:00.868816  630355 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1227 09:37:00.868893  630355 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1227 09:37:00.868964  630355 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1227 09:37:00.869016  630355 kubeadm.go:319] CGROUPS_IO: enabled
	I1227 09:37:00.945222  630355 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1227 09:37:00.945395  630355 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1227 09:37:00.945534  630355 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1227 09:37:00.953623  630355 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
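The preflight hint above is directly usable: the images can be pre-pulled against the same rendered config before kubeadm init runs. A minimal sketch, assuming the config is still at the path staged earlier in this run:

	sudo kubeadm config images pull --config /var/tmp/minikube/kubeadm.yaml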
	I1227 09:36:56.366926  631392 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-497722" ...
	I1227 09:36:56.367007  631392 cli_runner.go:164] Run: docker start default-k8s-diff-port-497722
	I1227 09:36:56.696501  631392 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-497722 --format={{.State.Status}}
	I1227 09:36:56.724853  631392 kic.go:430] container "default-k8s-diff-port-497722" state is running.
	I1227 09:36:56.725355  631392 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-497722
	I1227 09:36:56.748319  631392 profile.go:143] Saving config to /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/default-k8s-diff-port-497722/config.json ...
	I1227 09:36:56.748736  631392 machine.go:94] provisionDockerMachine start ...
	I1227 09:36:56.748860  631392 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-497722
	I1227 09:36:56.769479  631392 main.go:144] libmachine: Using SSH client type: native
	I1227 09:36:56.769885  631392 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33468 <nil> <nil>}
	I1227 09:36:56.769906  631392 main.go:144] libmachine: About to run SSH command:
	hostname
	I1227 09:36:56.770811  631392 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:56300->127.0.0.1:33468: read: connection reset by peer
	I1227 09:36:59.952704  631392 main.go:144] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-497722
	
	I1227 09:36:59.952731  631392 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-497722"
	I1227 09:36:59.952803  631392 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-497722
	I1227 09:36:59.977726  631392 main.go:144] libmachine: Using SSH client type: native
	I1227 09:36:59.978041  631392 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33468 <nil> <nil>}
	I1227 09:36:59.978072  631392 main.go:144] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-497722 && echo "default-k8s-diff-port-497722" | sudo tee /etc/hostname
	I1227 09:37:00.132462  631392 main.go:144] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-497722
	
	I1227 09:37:00.132551  631392 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-497722
	I1227 09:37:00.162410  631392 main.go:144] libmachine: Using SSH client type: native
	I1227 09:37:00.162741  631392 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33468 <nil> <nil>}
	I1227 09:37:00.162763  631392 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-497722' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-497722/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-497722' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1227 09:37:00.320889  631392 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1227 09:37:00.321032  631392 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22343-373581/.minikube CaCertPath:/home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22343-373581/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22343-373581/.minikube}
	I1227 09:37:00.321059  631392 ubuntu.go:190] setting up certificates
	I1227 09:37:00.321086  631392 provision.go:84] configureAuth start
	I1227 09:37:00.321152  631392 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-497722
	I1227 09:37:00.345021  631392 provision.go:143] copyHostCerts
	I1227 09:37:00.345085  631392 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-373581/.minikube/cert.pem, removing ...
	I1227 09:37:00.345108  631392 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-373581/.minikube/cert.pem
	I1227 09:37:00.345193  631392 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22343-373581/.minikube/cert.pem (1123 bytes)
	I1227 09:37:00.345342  631392 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-373581/.minikube/key.pem, removing ...
	I1227 09:37:00.345358  631392 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-373581/.minikube/key.pem
	I1227 09:37:00.345408  631392 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22343-373581/.minikube/key.pem (1679 bytes)
	I1227 09:37:00.345527  631392 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-373581/.minikube/ca.pem, removing ...
	I1227 09:37:00.345542  631392 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-373581/.minikube/ca.pem
	I1227 09:37:00.345633  631392 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22343-373581/.minikube/ca.pem (1082 bytes)
	I1227 09:37:00.345740  631392 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22343-373581/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-497722 san=[127.0.0.1 192.168.103.2 default-k8s-diff-port-497722 localhost minikube]
	I1227 09:37:00.442570  631392 provision.go:177] copyRemoteCerts
	I1227 09:37:00.442624  631392 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1227 09:37:00.442658  631392 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-497722
	I1227 09:37:00.466058  631392 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33468 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/default-k8s-diff-port-497722/id_rsa Username:docker}
	I1227 09:37:00.564061  631392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1227 09:37:00.581525  631392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1227 09:37:00.598744  631392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1227 09:37:00.617306  631392 provision.go:87] duration metric: took 296.195744ms to configureAuth
	I1227 09:37:00.617333  631392 ubuntu.go:206] setting minikube options for container-runtime
	I1227 09:37:00.617559  631392 config.go:182] Loaded profile config "default-k8s-diff-port-497722": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:37:00.617677  631392 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-497722
	I1227 09:37:00.639063  631392 main.go:144] libmachine: Using SSH client type: native
	I1227 09:37:00.639350  631392 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33468 <nil> <nil>}
	I1227 09:37:00.639374  631392 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1227 09:37:00.992976  631392 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1227 09:37:00.993002  631392 machine.go:97] duration metric: took 4.244244423s to provisionDockerMachine
	I1227 09:37:00.993015  631392 start.go:293] postStartSetup for "default-k8s-diff-port-497722" (driver="docker")
	I1227 09:37:00.993027  631392 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1227 09:37:00.993100  631392 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1227 09:37:00.993147  631392 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-497722
	I1227 09:37:01.014724  631392 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33468 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/default-k8s-diff-port-497722/id_rsa Username:docker}
	I1227 09:37:01.108368  631392 ssh_runner.go:195] Run: cat /etc/os-release
	I1227 09:37:01.111898  631392 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1227 09:37:01.111923  631392 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1227 09:37:01.111934  631392 filesync.go:126] Scanning /home/jenkins/minikube-integration/22343-373581/.minikube/addons for local assets ...
	I1227 09:37:01.111974  631392 filesync.go:126] Scanning /home/jenkins/minikube-integration/22343-373581/.minikube/files for local assets ...
	I1227 09:37:01.112059  631392 filesync.go:149] local asset: /home/jenkins/minikube-integration/22343-373581/.minikube/files/etc/ssl/certs/3771712.pem -> 3771712.pem in /etc/ssl/certs
	I1227 09:37:01.112148  631392 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1227 09:37:00.957070  630355 out.go:252]   - Generating certificates and keys ...
	I1227 09:37:00.957171  630355 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1227 09:37:00.957258  630355 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1227 09:37:01.046705  630355 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1227 09:37:01.170369  630355 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1227 09:37:01.237679  630355 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1227 09:37:01.119477  631392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/files/etc/ssl/certs/3771712.pem --> /etc/ssl/certs/3771712.pem (1708 bytes)
	I1227 09:37:01.136366  631392 start.go:296] duration metric: took 143.321557ms for postStartSetup
	I1227 09:37:01.136451  631392 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 09:37:01.136488  631392 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-497722
	I1227 09:37:01.156365  631392 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33468 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/default-k8s-diff-port-497722/id_rsa Username:docker}
	I1227 09:37:01.245668  631392 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1227 09:37:01.250596  631392 fix.go:56] duration metric: took 4.904478049s for fixHost
	I1227 09:37:01.250623  631392 start.go:83] releasing machines lock for "default-k8s-diff-port-497722", held for 4.904529799s
	I1227 09:37:01.250702  631392 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-497722
	I1227 09:37:01.270598  631392 ssh_runner.go:195] Run: cat /version.json
	I1227 09:37:01.270653  631392 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-497722
	I1227 09:37:01.270708  631392 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1227 09:37:01.270815  631392 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-497722
	I1227 09:37:01.289307  631392 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33468 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/default-k8s-diff-port-497722/id_rsa Username:docker}
	I1227 09:37:01.291072  631392 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33468 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/default-k8s-diff-port-497722/id_rsa Username:docker}
	I1227 09:37:01.376675  631392 ssh_runner.go:195] Run: systemctl --version
	I1227 09:37:01.432189  631392 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1227 09:37:01.470826  631392 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1227 09:37:01.475338  631392 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1227 09:37:01.475421  631392 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1227 09:37:01.483037  631392 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1227 09:37:01.483059  631392 start.go:496] detecting cgroup driver to use...
	I1227 09:37:01.483091  631392 detect.go:190] detected "systemd" cgroup driver on host os
	I1227 09:37:01.483133  631392 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1227 09:37:01.498478  631392 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 09:37:01.511191  631392 docker.go:218] disabling cri-docker service (if available) ...
	I1227 09:37:01.511242  631392 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1227 09:37:01.526955  631392 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1227 09:37:01.540540  631392 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1227 09:37:01.622776  631392 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1227 09:37:01.703107  631392 docker.go:234] disabling docker service ...
	I1227 09:37:01.703196  631392 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1227 09:37:01.717309  631392 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1227 09:37:01.729344  631392 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1227 09:37:01.840554  631392 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1227 09:37:01.935549  631392 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1227 09:37:01.947578  631392 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 09:37:01.961324  631392 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1227 09:37:01.961377  631392 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:37:01.970001  631392 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1227 09:37:01.970098  631392 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:37:01.979093  631392 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:37:01.989166  631392 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:37:01.998100  631392 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1227 09:37:02.007297  631392 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:37:02.017288  631392 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:37:02.025859  631392 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:37:02.035574  631392 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1227 09:37:02.043526  631392 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1227 09:37:02.050579  631392 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 09:37:02.135042  631392 ssh_runner.go:195] Run: sudo systemctl restart crio
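The sed sequence above converges /etc/crio/crio.conf.d/02-crio.conf before the restart: pin the pause image, set the systemd cgroup manager, move conmon into the pod cgroup, and open unprivileged ports via a default sysctl. The expected resulting fragment (a sketch assembled from the commands above, not a dump of the actual file):

	pause_image = "registry.k8s.io/pause:3.10.1"
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]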
	I1227 09:37:02.277022  631392 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1227 09:37:02.277093  631392 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1227 09:37:02.280991  631392 start.go:574] Will wait 60s for crictl version
	I1227 09:37:02.281049  631392 ssh_runner.go:195] Run: which crictl
	I1227 09:37:02.284712  631392 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1227 09:37:02.310427  631392 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1227 09:37:02.310505  631392 ssh_runner.go:195] Run: crio --version
	I1227 09:37:02.339457  631392 ssh_runner.go:195] Run: crio --version
	I1227 09:37:02.369305  631392 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.34.3 ...
	I1227 09:37:01.372389  630355 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1227 09:37:01.413174  630355 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1227 09:37:01.413305  630355 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-246956] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1227 09:37:01.527664  630355 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1227 09:37:01.527805  630355 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-246956] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1227 09:37:01.615200  630355 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1227 09:37:01.840816  630355 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1227 09:37:02.017322  630355 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1227 09:37:02.017649  630355 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1227 09:37:02.264512  630355 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1227 09:37:02.566832  630355 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1227 09:37:02.683014  630355 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1227 09:37:02.757540  630355 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1227 09:37:02.820765  630355 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1227 09:37:02.821428  630355 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1227 09:37:02.825397  630355 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1227 09:37:02.370401  631392 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-497722 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 09:37:02.389071  631392 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1227 09:37:02.393241  631392 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 09:37:02.403448  631392 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-497722 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-497722 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1227 09:37:02.403574  631392 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 09:37:02.403630  631392 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 09:37:02.438551  631392 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 09:37:02.438590  631392 crio.go:433] Images already preloaded, skipping extraction
	I1227 09:37:02.438667  631392 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 09:37:02.465209  631392 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 09:37:02.465235  631392 cache_images.go:86] Images are preloaded, skipping loading
	I1227 09:37:02.465245  631392 kubeadm.go:935] updating node { 192.168.103.2 8444 v1.35.0 crio true true} ...
	I1227 09:37:02.465366  631392 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-497722 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-497722 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1227 09:37:02.465461  631392 ssh_runner.go:195] Run: crio config
	I1227 09:37:02.513257  631392 cni.go:84] Creating CNI manager for ""
	I1227 09:37:02.513278  631392 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 09:37:02.513294  631392 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1227 09:37:02.513317  631392 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8444 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-497722 NodeName:default-k8s-diff-port-497722 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1227 09:37:02.513444  631392 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-497722"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1227 09:37:02.513505  631392 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1227 09:37:02.521923  631392 binaries.go:51] Found k8s binaries, skipping transfer
	I1227 09:37:02.521985  631392 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1227 09:37:02.529554  631392 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I1227 09:37:02.543277  631392 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1227 09:37:02.555132  631392 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2227 bytes)
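	(The kubeadm.yaml.new just copied over is the v1beta4 config dumped above; minikube renders it from Go templates in kubeadm.go before scp'ing it to the node. A minimal sketch of that render-a-template pattern — the cut-down template and struct below are hypothetical illustrations, not minikube's actual ones:

	// Minimal illustration of rendering a kubeadm config from a Go template.
	// The template and params struct are simplified stand-ins for this sketch.
	package main

	import (
		"os"
		"text/template"
	)

	type clusterParams struct {
		AdvertiseAddress  string
		BindPort          int
		KubernetesVersion string
		PodSubnet         string
		ServiceSubnet     string
	}

	const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.AdvertiseAddress}}
	  bindPort: {{.BindPort}}
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	kubernetesVersion: {{.KubernetesVersion}}
	networking:
	  podSubnet: "{{.PodSubnet}}"
	  serviceSubnet: {{.ServiceSubnet}}
	`

	func main() {
		p := clusterParams{
			AdvertiseAddress:  "192.168.103.2",
			BindPort:          8444,
			KubernetesVersion: "v1.35.0",
			PodSubnet:         "10.244.0.0/16",
			ServiceSubnet:     "10.96.0.0/12",
		}
		// Render to stdout; minikube instead copies the result to
		// /var/tmp/minikube/kubeadm.yaml.new as in the scp line above.
		tmpl := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
		if err := tmpl.Execute(os.Stdout, p); err != nil {
			panic(err)
		}
	}
	)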
	I1227 09:37:02.567822  631392 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1227 09:37:02.571320  631392 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
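	(The /etc/hosts one-liner above is an idempotent upsert: drop any line already ending in a tab plus control-plane.minikube.internal, then append the current IP. A rough Go equivalent of the same filter-and-append logic — writing /etc/hosts needs root, and error handling here is minimal:

	// Sketch of the idempotent hosts-file update shown in the log line above.
	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func upsertHostsEntry(path, ip, host string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		// Keep every line except an existing "<ip>\t<host>" mapping,
		// mirroring the grep -v $'\t<host>$' in the shell version.
		var kept []string
		for _, line := range strings.Split(string(data), "\n") {
			if !strings.HasSuffix(line, "\t"+host) {
				kept = append(kept, line)
			}
		}
		out := strings.TrimRight(strings.Join(kept, "\n"), "\n") +
			fmt.Sprintf("\n%s\t%s\n", ip, host)
		return os.WriteFile(path, []byte(out), 0644)
	}

	func main() {
		if err := upsertHostsEntry("/etc/hosts", "192.168.103.2", "control-plane.minikube.internal"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}
	)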
	I1227 09:37:02.580644  631392 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 09:37:02.664822  631392 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 09:37:02.693404  631392 certs.go:69] Setting up /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/default-k8s-diff-port-497722 for IP: 192.168.103.2
	I1227 09:37:02.693441  631392 certs.go:195] generating shared ca certs ...
	I1227 09:37:02.693462  631392 certs.go:227] acquiring lock for ca certs: {Name:mkab52f7e7387ca5a3a8b54d27e85c3698ea95d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:37:02.693637  631392 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22343-373581/.minikube/ca.key
	I1227 09:37:02.693699  631392 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22343-373581/.minikube/proxy-client-ca.key
	I1227 09:37:02.693714  631392 certs.go:257] generating profile certs ...
	I1227 09:37:02.693848  631392 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/default-k8s-diff-port-497722/client.key
	I1227 09:37:02.693949  631392 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/default-k8s-diff-port-497722/apiserver.key.70f960dd
	I1227 09:37:02.694002  631392 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/default-k8s-diff-port-497722/proxy-client.key
	I1227 09:37:02.694163  631392 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/377171.pem (1338 bytes)
	W1227 09:37:02.694205  631392 certs.go:480] ignoring /home/jenkins/minikube-integration/22343-373581/.minikube/certs/377171_empty.pem, impossibly tiny 0 bytes
	I1227 09:37:02.694217  631392 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca-key.pem (1679 bytes)
	I1227 09:37:02.694258  631392 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca.pem (1082 bytes)
	I1227 09:37:02.694290  631392 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/cert.pem (1123 bytes)
	I1227 09:37:02.694323  631392 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/key.pem (1679 bytes)
	I1227 09:37:02.694385  631392 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-373581/.minikube/files/etc/ssl/certs/3771712.pem (1708 bytes)
	I1227 09:37:02.695781  631392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1227 09:37:02.717703  631392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1227 09:37:02.740004  631392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1227 09:37:02.760168  631392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1227 09:37:02.785274  631392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/default-k8s-diff-port-497722/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1227 09:37:02.808608  631392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/default-k8s-diff-port-497722/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1227 09:37:02.826033  631392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/default-k8s-diff-port-497722/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1227 09:37:02.851489  631392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/default-k8s-diff-port-497722/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1227 09:37:02.871940  631392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1227 09:37:02.895485  631392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/certs/377171.pem --> /usr/share/ca-certificates/377171.pem (1338 bytes)
	I1227 09:37:02.913955  631392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/files/etc/ssl/certs/3771712.pem --> /usr/share/ca-certificates/3771712.pem (1708 bytes)
	I1227 09:37:02.930694  631392 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1227 09:37:02.945635  631392 ssh_runner.go:195] Run: openssl version
	I1227 09:37:02.953715  631392 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:37:02.961830  631392 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1227 09:37:02.969519  631392 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:37:02.973176  631392 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 09:06 /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:37:02.973234  631392 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:37:03.016770  631392 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1227 09:37:03.024510  631392 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/377171.pem
	I1227 09:37:03.031699  631392 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/377171.pem /etc/ssl/certs/377171.pem
	I1227 09:37:03.039836  631392 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/377171.pem
	I1227 09:37:03.043468  631392 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 27 09:09 /usr/share/ca-certificates/377171.pem
	I1227 09:37:03.043517  631392 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/377171.pem
	I1227 09:37:03.079383  631392 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1227 09:37:03.086880  631392 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3771712.pem
	I1227 09:37:03.094441  631392 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3771712.pem /etc/ssl/certs/3771712.pem
	I1227 09:37:03.102480  631392 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3771712.pem
	I1227 09:37:03.105974  631392 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 27 09:09 /usr/share/ca-certificates/3771712.pem
	I1227 09:37:03.106034  631392 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3771712.pem
	I1227 09:37:03.140053  631392 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1227 09:37:03.147121  631392 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1227 09:37:03.150759  631392 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1227 09:37:03.185667  631392 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1227 09:37:03.239684  631392 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1227 09:37:03.282935  631392 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1227 09:37:03.332349  631392 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1227 09:37:03.387948  631392 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
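	(Each `openssl x509 -noout -in <cert> -checkend 86400` run above exits non-zero if the certificate expires within the next 86400 seconds, i.e. 24 hours. An equivalent check in Go with crypto/x509, shown as an illustration rather than minikube's actual code:

	// Go equivalent of `openssl x509 -noout -checkend 86400`:
	// report whether a PEM certificate expires within the given duration.
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("%s: no PEM block found", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(2)
		}
		if soon {
			fmt.Println("certificate will expire within 24h") // openssl exits 1 here
			os.Exit(1)
		}
		fmt.Println("certificate is valid for at least another 24h")
	}
	)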
	I1227 09:37:03.433087  631392 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-497722 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-497722 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 09:37:03.433200  631392 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 09:37:03.433280  631392 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 09:37:03.466372  631392 cri.go:96] found id: "f7a05d26251e8a9f4091116e036a79a8654a182636649e96285fc252c0530199"
	I1227 09:37:03.466401  631392 cri.go:96] found id: "05ea82c1bf330e30a293e6d7ea1c01b1766ebce50eeaf8868bbcf622fd71d8e8"
	I1227 09:37:03.466408  631392 cri.go:96] found id: "0c8a5140613f466c8107ca22c4400874507b9d96db9fc14bb7f9ecf967957942"
	I1227 09:37:03.466413  631392 cri.go:96] found id: "0f35c56ae0629feaaf5192c69a9a652f101e67591c4c93f500daa6ceb2a62911"
	I1227 09:37:03.466417  631392 cri.go:96] found id: ""
	I1227 09:37:03.466465  631392 ssh_runner.go:195] Run: sudo runc list -f json
	W1227 09:37:03.479438  631392 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T09:37:03Z" level=error msg="open /run/runc: no such file or directory"
	I1227 09:37:03.479541  631392 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1227 09:37:03.489354  631392 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1227 09:37:03.489372  631392 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1227 09:37:03.489431  631392 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1227 09:37:03.497550  631392 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1227 09:37:03.498709  631392 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-497722" does not appear in /home/jenkins/minikube-integration/22343-373581/kubeconfig
	I1227 09:37:03.499508  631392 kubeconfig.go:62] /home/jenkins/minikube-integration/22343-373581/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-497722" cluster setting kubeconfig missing "default-k8s-diff-port-497722" context setting]
	I1227 09:37:03.500656  631392 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-373581/kubeconfig: {Name:mkb411689a0952d3f8325ae3ededdb7352d2980c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:37:03.503000  631392 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1227 09:37:03.510898  631392 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.103.2
	I1227 09:37:03.510931  631392 kubeadm.go:602] duration metric: took 21.551771ms to restartPrimaryControlPlane
	I1227 09:37:03.510940  631392 kubeadm.go:403] duration metric: took 77.869263ms to StartCluster
	I1227 09:37:03.510958  631392 settings.go:142] acquiring lock: {Name:mkc867fd794159ff1847b43b60548e454c403aa9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:37:03.511015  631392 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22343-373581/kubeconfig
	I1227 09:37:03.512421  631392 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-373581/kubeconfig: {Name:mkb411689a0952d3f8325ae3ededdb7352d2980c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:37:03.512668  631392 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1227 09:37:03.512733  631392 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1227 09:37:03.512865  631392 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-497722"
	I1227 09:37:03.512886  631392 config.go:182] Loaded profile config "default-k8s-diff-port-497722": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:37:03.512903  631392 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-497722"
	I1227 09:37:03.512924  631392 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-497722"
	I1227 09:37:03.512892  631392 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-497722"
	W1227 09:37:03.512933  631392 addons.go:248] addon dashboard should already be in state true
	W1227 09:37:03.512937  631392 addons.go:248] addon storage-provisioner should already be in state true
	I1227 09:37:03.512934  631392 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-497722"
	I1227 09:37:03.512963  631392 host.go:66] Checking if "default-k8s-diff-port-497722" exists ...
	I1227 09:37:03.512991  631392 host.go:66] Checking if "default-k8s-diff-port-497722" exists ...
	I1227 09:37:03.512985  631392 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-497722"
	I1227 09:37:03.513292  631392 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-497722 --format={{.State.Status}}
	I1227 09:37:03.513452  631392 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-497722 --format={{.State.Status}}
	I1227 09:37:03.513483  631392 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-497722 --format={{.State.Status}}
	I1227 09:37:03.515099  631392 out.go:179] * Verifying Kubernetes components...
	I1227 09:37:03.516218  631392 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 09:37:03.545006  631392 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-497722"
	W1227 09:37:03.545032  631392 addons.go:248] addon default-storageclass should already be in state true
	I1227 09:37:03.545065  631392 host.go:66] Checking if "default-k8s-diff-port-497722" exists ...
	I1227 09:37:03.545116  631392 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1227 09:37:03.545116  631392 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1227 09:37:03.545523  631392 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-497722 --format={{.State.Status}}
	I1227 09:37:03.546781  631392 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1227 09:37:03.546811  631392 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1227 09:37:03.546873  631392 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-497722
	I1227 09:37:03.547729  631392 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1227 09:37:00.496152  629532 addons.go:530] duration metric: took 1.96813915s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1227 09:37:00.986939  629532 api_server.go:299] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1227 09:37:00.991714  629532 api_server.go:325] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1227 09:37:00.992754  629532 api_server.go:141] control plane version: v1.35.0
	I1227 09:37:00.992785  629532 api_server.go:131] duration metric: took 506.570525ms to wait for apiserver health ...
	I1227 09:37:00.992822  629532 system_pods.go:43] waiting for kube-system pods to appear ...
	I1227 09:37:00.997469  629532 system_pods.go:59] 8 kube-system pods found
	I1227 09:37:00.999467  629532 system_pods.go:61] "coredns-7d764666f9-wnzhx" [2152780b-b980-4e4a-b652-9cd0ec857a2a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 09:37:00.999484  629532 system_pods.go:61] "etcd-no-preload-963457" [ecf5237d-e6e0-4cee-977a-e2356ec1db8f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1227 09:37:00.999494  629532 system_pods.go:61] "kindnet-7kw8b" [36ed3b47-67e2-483b-9563-f3366df2f0c5] Running
	I1227 09:37:00.999504  629532 system_pods.go:61] "kube-apiserver-no-preload-963457" [95e8b881-2ba5-464a-951b-8cfe1d65f33b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1227 09:37:00.999520  629532 system_pods.go:61] "kube-controller-manager-no-preload-963457" [2f715235-2f6e-414e-911b-d6ee55e67a06] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1227 09:37:00.999526  629532 system_pods.go:61] "kube-proxy-grkqs" [6f0a5a48-9159-478a-8949-827103a7c85c] Running
	I1227 09:37:00.999537  629532 system_pods.go:61] "kube-scheduler-no-preload-963457" [f5b9da44-e7d5-4901-b9c6-2315b873d53a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1227 09:37:00.999542  629532 system_pods.go:61] "storage-provisioner" [3de40d14-cc88-44f8-a071-caea798ff465] Running
	I1227 09:37:00.999552  629532 system_pods.go:74] duration metric: took 6.721122ms to wait for pod list to return data ...
	I1227 09:37:00.999566  629532 default_sa.go:34] waiting for default service account to be created ...
	I1227 09:37:01.002442  629532 default_sa.go:45] found service account: "default"
	I1227 09:37:01.002467  629532 default_sa.go:55] duration metric: took 2.893913ms for default service account to be created ...
	I1227 09:37:01.002476  629532 system_pods.go:116] waiting for k8s-apps to be running ...
	I1227 09:37:01.005574  629532 system_pods.go:86] 8 kube-system pods found
	I1227 09:37:01.005606  629532 system_pods.go:89] "coredns-7d764666f9-wnzhx" [2152780b-b980-4e4a-b652-9cd0ec857a2a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 09:37:01.005617  629532 system_pods.go:89] "etcd-no-preload-963457" [ecf5237d-e6e0-4cee-977a-e2356ec1db8f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1227 09:37:01.005636  629532 system_pods.go:89] "kindnet-7kw8b" [36ed3b47-67e2-483b-9563-f3366df2f0c5] Running
	I1227 09:37:01.005647  629532 system_pods.go:89] "kube-apiserver-no-preload-963457" [95e8b881-2ba5-464a-951b-8cfe1d65f33b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1227 09:37:01.005659  629532 system_pods.go:89] "kube-controller-manager-no-preload-963457" [2f715235-2f6e-414e-911b-d6ee55e67a06] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1227 09:37:01.005669  629532 system_pods.go:89] "kube-proxy-grkqs" [6f0a5a48-9159-478a-8949-827103a7c85c] Running
	I1227 09:37:01.005678  629532 system_pods.go:89] "kube-scheduler-no-preload-963457" [f5b9da44-e7d5-4901-b9c6-2315b873d53a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1227 09:37:01.005686  629532 system_pods.go:89] "storage-provisioner" [3de40d14-cc88-44f8-a071-caea798ff465] Running
	I1227 09:37:01.005694  629532 system_pods.go:126] duration metric: took 3.211017ms to wait for k8s-apps to be running ...
	I1227 09:37:01.005703  629532 system_svc.go:44] waiting for kubelet service to be running ....
	I1227 09:37:01.005745  629532 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 09:37:01.019329  629532 system_svc.go:56] duration metric: took 13.615658ms WaitForService to wait for kubelet
	I1227 09:37:01.019357  629532 kubeadm.go:587] duration metric: took 2.491446215s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 09:37:01.019406  629532 node_conditions.go:102] verifying NodePressure condition ...
	I1227 09:37:01.022469  629532 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1227 09:37:01.022494  629532 node_conditions.go:123] node cpu capacity is 8
	I1227 09:37:01.022508  629532 node_conditions.go:105] duration metric: took 3.09307ms to run NodePressure ...
	I1227 09:37:01.022519  629532 start.go:242] waiting for startup goroutines ...
	I1227 09:37:01.022526  629532 start.go:247] waiting for cluster config update ...
	I1227 09:37:01.022539  629532 start.go:256] writing updated cluster config ...
	I1227 09:37:01.022825  629532 ssh_runner.go:195] Run: rm -f paused
	I1227 09:37:01.026415  629532 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1227 09:37:01.032367  629532 pod_ready.go:83] waiting for pod "coredns-7d764666f9-wnzhx" in "kube-system" namespace to be "Ready" or be gone ...
	W1227 09:37:03.037599  629532 pod_ready.go:104] pod "coredns-7d764666f9-wnzhx" is not "Ready", error: <nil>
	I1227 09:37:03.548864  631392 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1227 09:37:03.548881  631392 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1227 09:37:03.548946  631392 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-497722
	I1227 09:37:03.580601  631392 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33468 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/default-k8s-diff-port-497722/id_rsa Username:docker}
	I1227 09:37:03.585910  631392 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1227 09:37:03.585995  631392 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1227 09:37:03.586083  631392 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-497722
	I1227 09:37:03.587524  631392 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33468 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/default-k8s-diff-port-497722/id_rsa Username:docker}
	I1227 09:37:03.611367  631392 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33468 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/default-k8s-diff-port-497722/id_rsa Username:docker}
	I1227 09:37:03.693388  631392 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 09:37:03.735280  631392 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1227 09:37:03.735311  631392 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1227 09:37:03.739715  631392 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-497722" to be "Ready" ...
	I1227 09:37:03.745369  631392 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1227 09:37:03.756381  631392 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1227 09:37:03.772524  631392 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1227 09:37:03.772552  631392 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1227 09:37:03.796801  631392 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1227 09:37:03.796827  631392 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1227 09:37:03.830005  631392 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1227 09:37:03.830031  631392 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1227 09:37:03.859155  631392 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1227 09:37:03.859261  631392 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1227 09:37:03.882955  631392 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1227 09:37:03.883028  631392 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1227 09:37:03.903109  631392 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1227 09:37:03.903135  631392 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1227 09:37:03.920352  631392 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1227 09:37:03.920389  631392 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1227 09:37:03.938041  631392 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1227 09:37:03.938067  631392 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1227 09:37:03.952702  631392 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1227 09:37:05.081740  631392 node_ready.go:49] node "default-k8s-diff-port-497722" is "Ready"
	I1227 09:37:05.081784  631392 node_ready.go:38] duration metric: took 1.342025698s for node "default-k8s-diff-port-497722" to be "Ready" ...
	I1227 09:37:05.081817  631392 api_server.go:52] waiting for apiserver process to appear ...
	I1227 09:37:05.081879  631392 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 09:37:05.822291  631392 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.076880271s)
	I1227 09:37:05.822373  631392 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.065940093s)
	I1227 09:37:05.822447  631392 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.869714406s)
	I1227 09:37:05.822496  631392 api_server.go:72] duration metric: took 2.309795438s to wait for apiserver process to appear ...
	I1227 09:37:05.822521  631392 api_server.go:88] waiting for apiserver healthz status ...
	I1227 09:37:05.822603  631392 api_server.go:299] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I1227 09:37:05.823594  631392 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-497722 addons enable metrics-server
	
	I1227 09:37:05.828585  631392 api_server.go:325] https://192.168.103.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1227 09:37:05.828612  631392 api_server.go:103] status: https://192.168.103.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1227 09:37:05.833008  631392 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1227 09:37:05.834729  631392 addons.go:530] duration metric: took 2.322002327s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
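	(The 500 responses above come from /healthz while the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks are still pending; the check simply retries until the endpoint returns 200, as it does at 09:37:06 below. A self-contained Go sketch of such a poll — the URL is taken from this log, and TLS verification is skipped only because the apiserver cert is signed by minikube's own CA; a real client would pin that CA instead:

	// Retry GET /healthz until the apiserver answers 200 "ok".
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
				// e.g. 500 while rbac/bootstrap-roles is still pending
				fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver not healthy after %s", timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.103.2:8444/healthz", 2*time.Minute); err != nil {
			panic(err)
		}
	}
	)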
	I1227 09:37:02.827049  630355 out.go:252]   - Booting up control plane ...
	I1227 09:37:02.827173  630355 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1227 09:37:02.827659  630355 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1227 09:37:02.828658  630355 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1227 09:37:02.846003  630355 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1227 09:37:02.846144  630355 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1227 09:37:02.856948  630355 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1227 09:37:02.857081  630355 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1227 09:37:02.857138  630355 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1227 09:37:02.965432  630355 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1227 09:37:02.965600  630355 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1227 09:37:03.468145  630355 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.896581ms
	I1227 09:37:03.472302  630355 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1227 09:37:03.472420  630355 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1227 09:37:03.472580  630355 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1227 09:37:03.472737  630355 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1227 09:37:04.484316  630355 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.011409443s
	I1227 09:37:05.492077  630355 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.019567088s
	I1227 09:37:06.973882  630355 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.501519063s
	I1227 09:37:06.989241  630355 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1227 09:37:06.997155  630355 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1227 09:37:07.005486  630355 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1227 09:37:07.005698  630355 kubeadm.go:319] [mark-control-plane] Marking the node newest-cni-246956 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1227 09:37:07.013098  630355 kubeadm.go:319] [bootstrap-token] Using token: kw0ne7.e0ofxhotwu7t62i6
	I1227 09:37:07.014365  630355 out.go:252]   - Configuring RBAC rules ...
	I1227 09:37:07.014525  630355 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1227 09:37:07.017519  630355 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1227 09:37:07.022777  630355 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1227 09:37:07.025010  630355 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1227 09:37:07.028326  630355 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1227 09:37:07.030560  630355 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1227 09:37:07.380628  630355 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1227 09:37:07.798585  630355 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1227 09:37:08.380637  630355 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1227 09:37:08.381938  630355 kubeadm.go:319] 
	I1227 09:37:08.382044  630355 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1227 09:37:08.382060  630355 kubeadm.go:319] 
	I1227 09:37:08.382154  630355 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1227 09:37:08.382182  630355 kubeadm.go:319] 
	I1227 09:37:08.382220  630355 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1227 09:37:08.382289  630355 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1227 09:37:08.382354  630355 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1227 09:37:08.382366  630355 kubeadm.go:319] 
	I1227 09:37:08.382438  630355 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1227 09:37:08.382449  630355 kubeadm.go:319] 
	I1227 09:37:08.382507  630355 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1227 09:37:08.382522  630355 kubeadm.go:319] 
	I1227 09:37:08.382996  630355 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1227 09:37:08.383196  630355 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1227 09:37:08.383307  630355 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1227 09:37:08.383318  630355 kubeadm.go:319] 
	I1227 09:37:08.383460  630355 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1227 09:37:08.383582  630355 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1227 09:37:08.383593  630355 kubeadm.go:319] 
	I1227 09:37:08.383722  630355 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token kw0ne7.e0ofxhotwu7t62i6 \
	I1227 09:37:08.383907  630355 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:fdd38f26fbaca57ed47cc723a4a49372698d5a31b243372bf53d5b274efa96f5 \
	I1227 09:37:08.383937  630355 kubeadm.go:319] 	--control-plane 
	I1227 09:37:08.383942  630355 kubeadm.go:319] 
	I1227 09:37:08.384084  630355 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1227 09:37:08.384101  630355 kubeadm.go:319] 
	I1227 09:37:08.384257  630355 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token kw0ne7.e0ofxhotwu7t62i6 \
	I1227 09:37:08.384453  630355 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:fdd38f26fbaca57ed47cc723a4a49372698d5a31b243372bf53d5b274efa96f5 
	I1227 09:37:08.387345  630355 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1227 09:37:08.387526  630355 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
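	(The --discovery-token-ca-cert-hash in the join commands above is "sha256:" followed by the SHA-256 digest of the cluster CA certificate's DER-encoded Subject Public Key Info. A small Go sketch that recomputes it from a CA cert on disk, with the path borrowed from this log:

	// Recompute kubeadm's discovery-token-ca-cert-hash from a PEM CA cert.
	package main

	import (
		"crypto/sha256"
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
	)

	func caCertHash(path string) (string, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return "", err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return "", fmt.Errorf("%s: no PEM block found", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return "", err
		}
		// Hash the DER-encoded SubjectPublicKeyInfo of the CA's public key.
		spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
		if err != nil {
			return "", err
		}
		return fmt.Sprintf("sha256:%x", sha256.Sum256(spki)), nil
	}

	func main() {
		h, err := caCertHash("/var/lib/minikube/certs/ca.crt")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println(h)
	}
	)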
	I1227 09:37:08.387558  630355 cni.go:84] Creating CNI manager for ""
	I1227 09:37:08.387571  630355 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 09:37:08.389255  630355 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1227 09:37:05.046412  629532 pod_ready.go:104] pod "coredns-7d764666f9-wnzhx" is not "Ready", error: <nil>
	W1227 09:37:07.538510  629532 pod_ready.go:104] pod "coredns-7d764666f9-wnzhx" is not "Ready", error: <nil>
	W1227 09:37:09.541062  629532 pod_ready.go:104] pod "coredns-7d764666f9-wnzhx" is not "Ready", error: <nil>
	I1227 09:37:06.322979  631392 api_server.go:299] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I1227 09:37:06.328435  631392 api_server.go:325] https://192.168.103.2:8444/healthz returned 200:
	ok
	I1227 09:37:06.329947  631392 api_server.go:141] control plane version: v1.35.0
	I1227 09:37:06.329981  631392 api_server.go:131] duration metric: took 507.399256ms to wait for apiserver health ...
	I1227 09:37:06.329993  631392 system_pods.go:43] waiting for kube-system pods to appear ...
	I1227 09:37:06.333774  631392 system_pods.go:59] 8 kube-system pods found
	I1227 09:37:06.333837  631392 system_pods.go:61] "coredns-7d764666f9-wfv5r" [c9445108-899a-4589-9501-4ffa7cd80a43] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 09:37:06.333861  631392 system_pods.go:61] "etcd-default-k8s-diff-port-497722" [5e51d021-5ac6-42ab-82c9-2f3db25a111e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1227 09:37:06.333877  631392 system_pods.go:61] "kindnet-rd4dj" [c9a44ecf-3860-4022-bcbe-25cfdb86502a] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1227 09:37:06.333888  631392 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-497722" [fc2ed6d4-1f2c-4e4d-9a59-07b9ef712893] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1227 09:37:06.333905  631392 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-497722" [c56ec738-10be-417c-808c-c65189f7a6a8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1227 09:37:06.333918  631392 system_pods.go:61] "kube-proxy-6z4vt" [25c2458e-d68a-488d-803c-80e0c6191bad] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1227 09:37:06.333929  631392 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-497722" [fccea7b8-16e7-488f-b2a4-3d46c2d108d9] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1227 09:37:06.333937  631392 system_pods.go:61] "storage-provisioner" [c84aab32-9d34-4b1d-a3ee-813926808b75] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1227 09:37:06.333948  631392 system_pods.go:74] duration metric: took 3.947114ms to wait for pod list to return data ...
	I1227 09:37:06.333961  631392 default_sa.go:34] waiting for default service account to be created ...
	I1227 09:37:06.336565  631392 default_sa.go:45] found service account: "default"
	I1227 09:37:06.336587  631392 default_sa.go:55] duration metric: took 2.617601ms for default service account to be created ...
	I1227 09:37:06.336597  631392 system_pods.go:116] waiting for k8s-apps to be running ...
	I1227 09:37:06.339330  631392 system_pods.go:86] 8 kube-system pods found
	I1227 09:37:06.339360  631392 system_pods.go:89] "coredns-7d764666f9-wfv5r" [c9445108-899a-4589-9501-4ffa7cd80a43] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 09:37:06.339372  631392 system_pods.go:89] "etcd-default-k8s-diff-port-497722" [5e51d021-5ac6-42ab-82c9-2f3db25a111e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1227 09:37:06.339386  631392 system_pods.go:89] "kindnet-rd4dj" [c9a44ecf-3860-4022-bcbe-25cfdb86502a] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1227 09:37:06.339401  631392 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-497722" [fc2ed6d4-1f2c-4e4d-9a59-07b9ef712893] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1227 09:37:06.339414  631392 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-497722" [c56ec738-10be-417c-808c-c65189f7a6a8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1227 09:37:06.339422  631392 system_pods.go:89] "kube-proxy-6z4vt" [25c2458e-d68a-488d-803c-80e0c6191bad] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1227 09:37:06.339436  631392 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-497722" [fccea7b8-16e7-488f-b2a4-3d46c2d108d9] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1227 09:37:06.339450  631392 system_pods.go:89] "storage-provisioner" [c84aab32-9d34-4b1d-a3ee-813926808b75] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1227 09:37:06.339460  631392 system_pods.go:126] duration metric: took 2.854974ms to wait for k8s-apps to be running ...
	I1227 09:37:06.339469  631392 system_svc.go:44] waiting for kubelet service to be running ....
	I1227 09:37:06.339521  631392 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 09:37:06.357773  631392 system_svc.go:56] duration metric: took 18.294659ms WaitForService to wait for kubelet
	I1227 09:37:06.357818  631392 kubeadm.go:587] duration metric: took 2.845118615s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 09:37:06.357845  631392 node_conditions.go:102] verifying NodePressure condition ...
	I1227 09:37:06.360952  631392 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1227 09:37:06.360981  631392 node_conditions.go:123] node cpu capacity is 8
	I1227 09:37:06.361002  631392 node_conditions.go:105] duration metric: took 3.142212ms to run NodePressure ...
	I1227 09:37:06.361018  631392 start.go:242] waiting for startup goroutines ...
	I1227 09:37:06.361047  631392 start.go:247] waiting for cluster config update ...
	I1227 09:37:06.361060  631392 start.go:256] writing updated cluster config ...
	I1227 09:37:06.361365  631392 ssh_runner.go:195] Run: rm -f paused
	I1227 09:37:06.366245  631392 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1227 09:37:06.370043  631392 pod_ready.go:83] waiting for pod "coredns-7d764666f9-wfv5r" in "kube-system" namespace to be "Ready" or be gone ...
	W1227 09:37:08.375981  631392 pod_ready.go:104] pod "coredns-7d764666f9-wfv5r" is not "Ready", error: <nil>
	W1227 09:37:10.379441  631392 pod_ready.go:104] pod "coredns-7d764666f9-wfv5r" is not "Ready", error: <nil>
	I1227 09:37:08.390254  630355 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1227 09:37:08.395438  630355 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.35.0/kubectl ...
	I1227 09:37:08.395460  630355 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I1227 09:37:08.411692  630355 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1227 09:37:08.701041  630355 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1227 09:37:08.701214  630355 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 09:37:08.701332  630355 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-246956 minikube.k8s.io/updated_at=2025_12_27T09_37_08_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6d12708197eb5984f26340287ced0d25367967e8 minikube.k8s.io/name=newest-cni-246956 minikube.k8s.io/primary=true
	I1227 09:37:08.716553  630355 ops.go:34] apiserver oom_adj: -16
	I1227 09:37:08.804282  630355 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 09:37:09.304609  630355 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 09:37:09.804478  630355 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 09:37:10.305050  630355 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 09:37:10.804402  630355 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 09:37:11.304767  630355 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 09:37:11.804674  630355 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 09:37:12.305069  630355 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 09:37:12.805104  630355 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 09:37:13.304712  630355 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 09:37:13.404981  630355 kubeadm.go:1114] duration metric: took 4.703817064s to wait for elevateKubeSystemPrivileges
	I1227 09:37:13.405019  630355 kubeadm.go:403] duration metric: took 12.787533089s to StartCluster
	I1227 09:37:13.405045  630355 settings.go:142] acquiring lock: {Name:mkc867fd794159ff1847b43b60548e454c403aa9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:37:13.405126  630355 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22343-373581/kubeconfig
	I1227 09:37:13.407805  630355 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-373581/kubeconfig: {Name:mkb411689a0952d3f8325ae3ededdb7352d2980c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:37:13.408105  630355 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1227 09:37:13.408219  630355 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1227 09:37:13.408237  630355 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1227 09:37:13.408318  630355 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-246956"
	I1227 09:37:13.408346  630355 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-246956"
	I1227 09:37:13.408385  630355 host.go:66] Checking if "newest-cni-246956" exists ...
	I1227 09:37:13.408436  630355 config.go:182] Loaded profile config "newest-cni-246956": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:37:13.408377  630355 addons.go:70] Setting default-storageclass=true in profile "newest-cni-246956"
	I1227 09:37:13.408494  630355 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-246956"
	I1227 09:37:13.408941  630355 cli_runner.go:164] Run: docker container inspect newest-cni-246956 --format={{.State.Status}}
	I1227 09:37:13.408985  630355 cli_runner.go:164] Run: docker container inspect newest-cni-246956 --format={{.State.Status}}
	I1227 09:37:13.410120  630355 out.go:179] * Verifying Kubernetes components...
	I1227 09:37:13.411386  630355 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 09:37:13.436409  630355 addons.go:239] Setting addon default-storageclass=true in "newest-cni-246956"
	I1227 09:37:13.436462  630355 host.go:66] Checking if "newest-cni-246956" exists ...
	I1227 09:37:13.436486  630355 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1227 09:37:13.436995  630355 cli_runner.go:164] Run: docker container inspect newest-cni-246956 --format={{.State.Status}}
	I1227 09:37:13.437815  630355 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1227 09:37:13.437836  630355 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1227 09:37:13.437890  630355 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-246956
	I1227 09:37:13.470222  630355 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1227 09:37:13.470333  630355 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1227 09:37:13.470460  630355 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-246956
	I1227 09:37:13.470880  630355 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/newest-cni-246956/id_rsa Username:docker}
	I1227 09:37:13.501499  630355 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/newest-cni-246956/id_rsa Username:docker}
	I1227 09:37:13.522610  630355 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1227 09:37:13.595670  630355 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1227 09:37:13.601932  630355 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 09:37:13.631849  630355 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1227 09:37:13.821310  630355 start.go:987] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1227 09:37:14.024585  630355 api_server.go:52] waiting for apiserver process to appear ...
	I1227 09:37:14.024669  630355 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 09:37:14.038954  630355 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1227 09:37:14.039897  630355 addons.go:530] duration metric: took 631.665184ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1227 09:37:14.044768  630355 api_server.go:72] duration metric: took 636.623759ms to wait for apiserver process to appear ...
	I1227 09:37:14.044806  630355 api_server.go:88] waiting for apiserver healthz status ...
	I1227 09:37:14.044828  630355 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1227 09:37:14.051156  630355 api_server.go:325] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1227 09:37:14.052113  630355 api_server.go:141] control plane version: v1.35.0
	I1227 09:37:14.052147  630355 api_server.go:131] duration metric: took 7.331844ms to wait for apiserver health ...
	I1227 09:37:14.052157  630355 system_pods.go:43] waiting for kube-system pods to appear ...
	I1227 09:37:14.055482  630355 system_pods.go:59] 8 kube-system pods found
	I1227 09:37:14.055525  630355 system_pods.go:61] "coredns-7d764666f9-kqzph" [cd4faccb-5994-46cb-a83b-d554df2fb8f2] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1227 09:37:14.055546  630355 system_pods.go:61] "etcd-newest-cni-246956" [26721526-906a-4949-a50f-92ea210b80be] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1227 09:37:14.055559  630355 system_pods.go:61] "kindnet-lmtxw" [e2185b04-5cba-4c54-86e0-9c2515f95074] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1227 09:37:14.055572  630355 system_pods.go:61] "kube-apiserver-newest-cni-246956" [7e3043fd-edc4-4182-8659-eba54f67a2d9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1227 09:37:14.055582  630355 system_pods.go:61] "kube-controller-manager-newest-cni-246956" [7a30adc9-ce06-4908-8f0b-ed3da78f6394] Running
	I1227 09:37:14.055591  630355 system_pods.go:61] "kube-proxy-65ltj" [a1e5773a-e15f-405b-bca5-62a52d6e83a2] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1227 09:37:14.055603  630355 system_pods.go:61] "kube-scheduler-newest-cni-246956" [e515cbde-415b-4a69-b0be-a4c87c86858e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1227 09:37:14.055620  630355 system_pods.go:61] "storage-provisioner" [0735bc86-6017-4c08-8562-4a36fe686929] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1227 09:37:14.055629  630355 system_pods.go:74] duration metric: took 3.463288ms to wait for pod list to return data ...
	I1227 09:37:14.055639  630355 default_sa.go:34] waiting for default service account to be created ...
	I1227 09:37:14.058131  630355 default_sa.go:45] found service account: "default"
	I1227 09:37:14.058152  630355 default_sa.go:55] duration metric: took 2.506015ms for default service account to be created ...
	I1227 09:37:14.058166  630355 kubeadm.go:587] duration metric: took 650.02674ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1227 09:37:14.058190  630355 node_conditions.go:102] verifying NodePressure condition ...
	I1227 09:37:14.060558  630355 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1227 09:37:14.060580  630355 node_conditions.go:123] node cpu capacity is 8
	I1227 09:37:14.060600  630355 node_conditions.go:105] duration metric: took 2.400745ms to run NodePressure ...
	I1227 09:37:14.060615  630355 start.go:242] waiting for startup goroutines ...
	I1227 09:37:14.325652  630355 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-246956" context rescaled to 1 replicas
	I1227 09:37:14.325757  630355 start.go:247] waiting for cluster config update ...
	I1227 09:37:14.325781  630355 start.go:256] writing updated cluster config ...
	I1227 09:37:14.326218  630355 ssh_runner.go:195] Run: rm -f paused
	I1227 09:37:14.386984  630355 start.go:625] kubectl: 1.35.0, cluster: 1.35.0 (minor skew: 0)
	I1227 09:37:14.389355  630355 out.go:179] * Done! kubectl is now configured to use "newest-cni-246956" cluster and "default" namespace by default
	W1227 09:37:12.038861  629532 pod_ready.go:104] pod "coredns-7d764666f9-wnzhx" is not "Ready", error: <nil>
	W1227 09:37:14.039515  629532 pod_ready.go:104] pod "coredns-7d764666f9-wnzhx" is not "Ready", error: <nil>
	W1227 09:37:12.876859  631392 pod_ready.go:104] pod "coredns-7d764666f9-wfv5r" is not "Ready", error: <nil>
	W1227 09:37:14.877407  631392 pod_ready.go:104] pod "coredns-7d764666f9-wfv5r" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Dec 27 09:36:43 embed-certs-912564 crio[570]: time="2025-12-27T09:36:43.108845615Z" level=info msg="Started container" PID=1786 containerID=743dc93133ddf9acbbb349b6b566d4ac9c36bcf100eb12449773583fa5e5ab76 description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-qwqqw/dashboard-metrics-scraper id=b670ab30-ee73-45f6-8f97-a8d12c6b3403 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f7c1e67845f4d6f3163de439bfb725dae1f0e93f501270cf7f5dc9027407a729
	Dec 27 09:36:43 embed-certs-912564 crio[570]: time="2025-12-27T09:36:43.14542476Z" level=info msg="Removing container: 7d2501ca0199b51ac747ff96467ef5de9e812a54c69349c92b62ad93f34bd323" id=56f40b43-7536-4e09-8051-cc6d694dff6d name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 27 09:36:43 embed-certs-912564 crio[570]: time="2025-12-27T09:36:43.153956859Z" level=info msg="Removed container 7d2501ca0199b51ac747ff96467ef5de9e812a54c69349c92b62ad93f34bd323: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-qwqqw/dashboard-metrics-scraper" id=56f40b43-7536-4e09-8051-cc6d694dff6d name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 27 09:36:56 embed-certs-912564 crio[570]: time="2025-12-27T09:36:56.186564061Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=c2dbbca2-4df1-4596-ab96-6c234e0d864e name=/runtime.v1.ImageService/ImageStatus
	Dec 27 09:36:56 embed-certs-912564 crio[570]: time="2025-12-27T09:36:56.187816462Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=0f649522-0501-4bdb-bcc3-af032aec4e49 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 09:36:56 embed-certs-912564 crio[570]: time="2025-12-27T09:36:56.189453527Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=0f2e7dc1-2af3-4100-8465-b9a997c0cd5a name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 09:36:56 embed-certs-912564 crio[570]: time="2025-12-27T09:36:56.189591703Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 09:36:56 embed-certs-912564 crio[570]: time="2025-12-27T09:36:56.203208493Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 09:36:56 embed-certs-912564 crio[570]: time="2025-12-27T09:36:56.203855197Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/e130f8ab8351e39574bb0e64dc834261ecf85529d013927ea43c7ab5b0bdb450/merged/etc/passwd: no such file or directory"
	Dec 27 09:36:56 embed-certs-912564 crio[570]: time="2025-12-27T09:36:56.203893781Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/e130f8ab8351e39574bb0e64dc834261ecf85529d013927ea43c7ab5b0bdb450/merged/etc/group: no such file or directory"
	Dec 27 09:36:56 embed-certs-912564 crio[570]: time="2025-12-27T09:36:56.204195012Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 09:36:56 embed-certs-912564 crio[570]: time="2025-12-27T09:36:56.250131539Z" level=info msg="Created container 84a07cadb1d9c9aa419b2726218cd527981d9603da3c51e418fea57634d1077b: kube-system/storage-provisioner/storage-provisioner" id=0f2e7dc1-2af3-4100-8465-b9a997c0cd5a name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 09:36:56 embed-certs-912564 crio[570]: time="2025-12-27T09:36:56.252390336Z" level=info msg="Starting container: 84a07cadb1d9c9aa419b2726218cd527981d9603da3c51e418fea57634d1077b" id=54670e2c-41c5-496b-ae3c-e7265fe8d95d name=/runtime.v1.RuntimeService/StartContainer
	Dec 27 09:36:56 embed-certs-912564 crio[570]: time="2025-12-27T09:36:56.255733714Z" level=info msg="Started container" PID=1801 containerID=84a07cadb1d9c9aa419b2726218cd527981d9603da3c51e418fea57634d1077b description=kube-system/storage-provisioner/storage-provisioner id=54670e2c-41c5-496b-ae3c-e7265fe8d95d name=/runtime.v1.RuntimeService/StartContainer sandboxID=f35ddd9bdfaef3f0f960a24c97745740cb977a9371189d095d7500e2901e1e8c
	Dec 27 09:37:12 embed-certs-912564 crio[570]: time="2025-12-27T09:37:12.066247541Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=461a1e46-504d-4acf-b1ae-8578e72228db name=/runtime.v1.ImageService/ImageStatus
	Dec 27 09:37:12 embed-certs-912564 crio[570]: time="2025-12-27T09:37:12.067343251Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=5881a276-678d-432f-8e3f-a420a44e2eaf name=/runtime.v1.ImageService/ImageStatus
	Dec 27 09:37:12 embed-certs-912564 crio[570]: time="2025-12-27T09:37:12.068357523Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-qwqqw/dashboard-metrics-scraper" id=768b595e-35cb-446f-b196-80bbd46f39b1 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 09:37:12 embed-certs-912564 crio[570]: time="2025-12-27T09:37:12.06848401Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 09:37:12 embed-certs-912564 crio[570]: time="2025-12-27T09:37:12.075539762Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 09:37:12 embed-certs-912564 crio[570]: time="2025-12-27T09:37:12.076276394Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 09:37:12 embed-certs-912564 crio[570]: time="2025-12-27T09:37:12.102598429Z" level=info msg="Created container 6e0f67ae51171b32108d2707ab568fce0ed3dd12409f5f0aaedd5e6e725a0f8a: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-qwqqw/dashboard-metrics-scraper" id=768b595e-35cb-446f-b196-80bbd46f39b1 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 09:37:12 embed-certs-912564 crio[570]: time="2025-12-27T09:37:12.103175508Z" level=info msg="Starting container: 6e0f67ae51171b32108d2707ab568fce0ed3dd12409f5f0aaedd5e6e725a0f8a" id=a2abcb8b-df4b-4460-892a-bed11dd4fdc3 name=/runtime.v1.RuntimeService/StartContainer
	Dec 27 09:37:12 embed-certs-912564 crio[570]: time="2025-12-27T09:37:12.105362053Z" level=info msg="Started container" PID=1836 containerID=6e0f67ae51171b32108d2707ab568fce0ed3dd12409f5f0aaedd5e6e725a0f8a description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-qwqqw/dashboard-metrics-scraper id=a2abcb8b-df4b-4460-892a-bed11dd4fdc3 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f7c1e67845f4d6f3163de439bfb725dae1f0e93f501270cf7f5dc9027407a729
	Dec 27 09:37:12 embed-certs-912564 crio[570]: time="2025-12-27T09:37:12.238356953Z" level=info msg="Removing container: 743dc93133ddf9acbbb349b6b566d4ac9c36bcf100eb12449773583fa5e5ab76" id=d5c4dc50-536a-4421-bdca-02caa749fa9a name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 27 09:37:12 embed-certs-912564 crio[570]: time="2025-12-27T09:37:12.249865337Z" level=info msg="Removed container 743dc93133ddf9acbbb349b6b566d4ac9c36bcf100eb12449773583fa5e5ab76: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-qwqqw/dashboard-metrics-scraper" id=d5c4dc50-536a-4421-bdca-02caa749fa9a name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	6e0f67ae51171       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           7 seconds ago       Exited              dashboard-metrics-scraper   3                   f7c1e67845f4d       dashboard-metrics-scraper-867fb5f87b-qwqqw   kubernetes-dashboard
	84a07cadb1d9c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           23 seconds ago      Running             storage-provisioner         1                   f35ddd9bdfaef       storage-provisioner                          kube-system
	6014177c8a204       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   44 seconds ago      Running             kubernetes-dashboard        0                   2fb901db9834a       kubernetes-dashboard-b84665fb8-jlksn         kubernetes-dashboard
	7a965f6d0d9f6       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           53 seconds ago      Running             busybox                     1                   3e1c76567f05c       busybox                                      default
	bbe4999435552       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                           53 seconds ago      Running             coredns                     0                   df36a8e1d7e7d       coredns-7d764666f9-vm5hp                     kube-system
	e321884e2b076       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           53 seconds ago      Exited              storage-provisioner         0                   f35ddd9bdfaef       storage-provisioner                          kube-system
	d3dff99ecfa4a       32652ff1bbe6b6df16a3dc621fcad4e6e185a2852bffc5847aab79e36c54bfb8                                           53 seconds ago      Running             kube-proxy                  0                   018f1c7841b53       kube-proxy-dv8ch                             kube-system
	7281d5c2323a0       4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251                                           53 seconds ago      Running             kindnet-cni                 0                   70da7d1207fdb       kindnet-bznfn                                kube-system
	5383d4cdce95a       0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2                                           56 seconds ago      Running             etcd                        0                   50ed092510ea8       etcd-embed-certs-912564                      kube-system
	4073c03ac98fe       550794e3b12ac21ec7fd940bdfb45f7c8dae3c52acd8c70e580b882de20c3dcc                                           56 seconds ago      Running             kube-scheduler              0                   7dc8a3aba75c6       kube-scheduler-embed-certs-912564            kube-system
	ba83fd494a8c5       2c9a4b058bd7e6ec479c38f9e8a1dac2f5ee5b0a3ebda6dfac968e8720229508                                           56 seconds ago      Running             kube-controller-manager     0                   392c9150aaddb       kube-controller-manager-embed-certs-912564   kube-system
	663c76b88f425       5c6acd67e9cd10eb60a246cd233db251ef62ea97e6572f897e873f0cb648f499                                           56 seconds ago      Running             kube-apiserver              0                   382dedbca1de0       kube-apiserver-embed-certs-912564            kube-system
	
	
	==> coredns [bbe499943555262124a4668032443d02c3df7d492d67bc1fcde5ffe6d8bfbec7] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:55900 - 62424 "HINFO IN 5988268151926662984.6856518043187956369. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.067996727s
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	
	
	==> describe nodes <==
	Name:               embed-certs-912564
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-912564
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6d12708197eb5984f26340287ced0d25367967e8
	                    minikube.k8s.io/name=embed-certs-912564
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_27T09_35_25_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 27 Dec 2025 09:35:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-912564
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 27 Dec 2025 09:37:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 27 Dec 2025 09:37:05 +0000   Sat, 27 Dec 2025 09:35:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 27 Dec 2025 09:37:05 +0000   Sat, 27 Dec 2025 09:35:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 27 Dec 2025 09:37:05 +0000   Sat, 27 Dec 2025 09:35:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 27 Dec 2025 09:37:05 +0000   Sat, 27 Dec 2025 09:35:43 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    embed-certs-912564
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 6a05ebbd9dbde47388fc365d694bbbfd
	  System UUID:                399800ce-3f0c-4a8a-a24c-ac96dc71a9c4
	  Boot ID:                    38891013-55ac-4a66-aef6-0b0711ecc60c
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 coredns-7d764666f9-vm5hp                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     109s
	  kube-system                 etcd-embed-certs-912564                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         115s
	  kube-system                 kindnet-bznfn                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      109s
	  kube-system                 kube-apiserver-embed-certs-912564             250m (3%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-controller-manager-embed-certs-912564    200m (2%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-proxy-dv8ch                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-scheduler-embed-certs-912564             100m (1%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kubernetes-dashboard        dashboard-metrics-scraper-867fb5f87b-qwqqw    0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-jlksn          0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  111s  node-controller  Node embed-certs-912564 event: Registered Node embed-certs-912564 in Controller
	  Normal  RegisteredNode  52s   node-controller  Node embed-certs-912564 event: Registered Node embed-certs-912564 in Controller
	
	
	==> dmesg <==
	[Dec27 09:02] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 3a ce 76 ea 00 16 08 06
	[ +15.308804] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 5e ed 76 f4 5a 59 08 06
	[  +0.000321] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 3a ce 76 ea 00 16 08 06
	[  +1.257710] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 62 30 8d 6e b6 d3 08 06
	[Dec27 09:03] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff de f8 3b c4 47 35 08 06
	[  +0.002052] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff de 0f d7 5a 4b 84 08 06
	[ +15.582399] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff f6 ce 0c 6c 86 d2 08 06
	[  +0.000307] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 62 30 8d 6e b6 d3 08 06
	[  +9.916919] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 5e cc 0c a0 d8 19 08 06
	[  +0.000360] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 06 01 72 f8 79 43 08 06
	[ +21.695394] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 66 bb be 81 50 ce 08 06
	[  +0.000381] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff de 0f d7 5a 4b 84 08 06
	
	
	==> etcd [5383d4cdce95af97f9b9e8e07db61c856f19c8db586c179d8ff736a43046829e] <==
	{"level":"info","ts":"2025-12-27T09:36:22.629243Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"da400bbece288f5a","local-member-id":"dfc97eb0aae75b33","from":"3.6","to":"3.6"}
	{"level":"info","ts":"2025-12-27T09:36:22.629411Z","caller":"embed/etcd.go:292","msg":"now serving peer/client/metrics","local-member-id":"dfc97eb0aae75b33","initial-advertise-peer-urls":["https://192.168.94.2:2380"],"listen-peer-urls":["https://192.168.94.2:2380"],"advertise-client-urls":["https://192.168.94.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.94.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-12-27T09:36:22.629469Z","caller":"embed/etcd.go:890","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-12-27T09:36:22.627781Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-27T09:36:22.630113Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-27T09:36:22.627431Z","caller":"etcdserver/server.go:483","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"dfc97eb0aae75b33","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2025-12-27T09:36:22.630225Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-27T09:36:23.318082Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"dfc97eb0aae75b33 is starting a new election at term 2"}
	{"level":"info","ts":"2025-12-27T09:36:23.318150Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"dfc97eb0aae75b33 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-12-27T09:36:23.318224Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"dfc97eb0aae75b33 received MsgPreVoteResp from dfc97eb0aae75b33 at term 2"}
	{"level":"info","ts":"2025-12-27T09:36:23.318247Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"dfc97eb0aae75b33 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-27T09:36:23.318271Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"dfc97eb0aae75b33 became candidate at term 3"}
	{"level":"info","ts":"2025-12-27T09:36:23.318893Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"dfc97eb0aae75b33 received MsgVoteResp from dfc97eb0aae75b33 at term 3"}
	{"level":"info","ts":"2025-12-27T09:36:23.318919Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"dfc97eb0aae75b33 has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-27T09:36:23.318939Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"dfc97eb0aae75b33 became leader at term 3"}
	{"level":"info","ts":"2025-12-27T09:36:23.318951Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: dfc97eb0aae75b33 elected leader dfc97eb0aae75b33 at term 3"}
	{"level":"info","ts":"2025-12-27T09:36:23.319636Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"dfc97eb0aae75b33","local-member-attributes":"{Name:embed-certs-912564 ClientURLs:[https://192.168.94.2:2379]}","cluster-id":"da400bbece288f5a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-27T09:36:23.319641Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T09:36:23.319671Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T09:36:23.319870Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-27T09:36:23.319912Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-27T09:36:23.320630Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-27T09:36:23.320721Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-27T09:36:23.323694Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-27T09:36:23.323881Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.94.2:2379"}
	
	
	==> kernel <==
	 09:37:19 up  1:19,  0 user,  load average: 3.10, 3.08, 2.34
	Linux embed-certs-912564 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [7281d5c2323a07673639860e705dd2779b623d238dca9f09c1c16c035ce01a03] <==
	I1227 09:36:25.606206       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1227 09:36:25.606494       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1227 09:36:25.606674       1 main.go:148] setting mtu 1500 for CNI 
	I1227 09:36:25.606696       1 main.go:178] kindnetd IP family: "ipv4"
	I1227 09:36:25.606719       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-27T09:36:25Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1227 09:36:25.805783       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1227 09:36:25.806128       1 controller.go:381] "Waiting for informer caches to sync"
	I1227 09:36:25.806474       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1227 09:36:25.806685       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1227 09:36:26.207345       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1227 09:36:26.207372       1 metrics.go:72] Registering metrics
	I1227 09:36:26.207451       1 controller.go:711] "Syncing nftables rules"
	I1227 09:36:35.806446       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1227 09:36:35.806522       1 main.go:301] handling current node
	I1227 09:36:45.806954       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1227 09:36:45.806994       1 main.go:301] handling current node
	I1227 09:36:55.805992       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1227 09:36:55.806045       1 main.go:301] handling current node
	I1227 09:37:05.806487       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1227 09:37:05.806560       1 main.go:301] handling current node
	I1227 09:37:15.808466       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1227 09:37:15.808534       1 main.go:301] handling current node
	
	
	==> kube-apiserver [663c76b88f42532f7c763b6916bdc80252b590b27aa690c8fe09d547aca1eb6c] <==
	I1227 09:36:24.256920       1 shared_informer.go:377] "Caches are synced"
	I1227 09:36:24.256951       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1227 09:36:24.256954       1 cache.go:39] Caches are synced for autoregister controller
	I1227 09:36:24.257103       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1227 09:36:24.257200       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1227 09:36:24.257225       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1227 09:36:24.257238       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1227 09:36:24.257688       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1227 09:36:24.264580       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1227 09:36:24.265347       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1227 09:36:24.306610       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1227 09:36:24.313890       1 shared_informer.go:377] "Caches are synced"
	I1227 09:36:24.313907       1 policy_source.go:248] refreshing policies
	I1227 09:36:24.391760       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1227 09:36:24.523284       1 controller.go:667] quota admission added evaluator for: namespaces
	I1227 09:36:24.549322       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1227 09:36:24.565690       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1227 09:36:24.573046       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1227 09:36:24.580004       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1227 09:36:24.606716       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.98.67.86"}
	I1227 09:36:24.617302       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.98.126.34"}
	I1227 09:36:25.159542       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1227 09:36:27.919019       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1227 09:36:28.077654       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1227 09:36:28.118456       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [ba83fd494a8c5a1bc7eb22555934e2b74494963aa284a3786fa73f76c60a9175] <==
	I1227 09:36:27.420974       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="embed-certs-912564"
	I1227 09:36:27.421032       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I1227 09:36:27.420589       1 shared_informer.go:377] "Caches are synced"
	I1227 09:36:27.421277       1 shared_informer.go:377] "Caches are synced"
	I1227 09:36:27.420582       1 shared_informer.go:377] "Caches are synced"
	I1227 09:36:27.420590       1 shared_informer.go:377] "Caches are synced"
	I1227 09:36:27.420615       1 shared_informer.go:377] "Caches are synced"
	I1227 09:36:27.420569       1 shared_informer.go:377] "Caches are synced"
	I1227 09:36:27.420580       1 shared_informer.go:377] "Caches are synced"
	I1227 09:36:27.421200       1 shared_informer.go:377] "Caches are synced"
	I1227 09:36:27.420599       1 shared_informer.go:377] "Caches are synced"
	I1227 09:36:27.420607       1 shared_informer.go:377] "Caches are synced"
	I1227 09:36:27.421543       1 shared_informer.go:377] "Caches are synced"
	I1227 09:36:27.420569       1 shared_informer.go:377] "Caches are synced"
	I1227 09:36:27.422478       1 shared_informer.go:377] "Caches are synced"
	I1227 09:36:27.422508       1 shared_informer.go:377] "Caches are synced"
	I1227 09:36:27.422543       1 shared_informer.go:377] "Caches are synced"
	I1227 09:36:27.422585       1 shared_informer.go:377] "Caches are synced"
	I1227 09:36:27.423170       1 shared_informer.go:377] "Caches are synced"
	I1227 09:36:27.429411       1 shared_informer.go:377] "Caches are synced"
	I1227 09:36:27.433978       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 09:36:27.520863       1 shared_informer.go:377] "Caches are synced"
	I1227 09:36:27.520879       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1227 09:36:27.520883       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1227 09:36:27.534563       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [d3dff99ecfa4aa14f6bbd97b1487dfae36574c672747a8bf6c8790ecad04653a] <==
	I1227 09:36:25.460733       1 server_linux.go:53] "Using iptables proxy"
	I1227 09:36:25.529309       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 09:36:25.629783       1 shared_informer.go:377] "Caches are synced"
	I1227 09:36:25.629849       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1227 09:36:25.629946       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1227 09:36:25.648462       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1227 09:36:25.648525       1 server_linux.go:136] "Using iptables Proxier"
	I1227 09:36:25.653766       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1227 09:36:25.654083       1 server.go:529] "Version info" version="v1.35.0"
	I1227 09:36:25.654098       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 09:36:25.655216       1 config.go:200] "Starting service config controller"
	I1227 09:36:25.655248       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1227 09:36:25.655304       1 config.go:403] "Starting serviceCIDR config controller"
	I1227 09:36:25.655326       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1227 09:36:25.655341       1 config.go:309] "Starting node config controller"
	I1227 09:36:25.655581       1 config.go:106] "Starting endpoint slice config controller"
	I1227 09:36:25.655718       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1227 09:36:25.655598       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1227 09:36:25.655746       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1227 09:36:25.755848       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1227 09:36:25.755847       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1227 09:36:25.755875       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [4073c03ac98fe56856e504a1aa0d5a1748d26e6ce500dc31ad8e91ee49384cd6] <==
	I1227 09:36:22.807879       1 serving.go:386] Generated self-signed cert in-memory
	W1227 09:36:24.201239       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1227 09:36:24.201273       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1227 09:36:24.201285       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1227 09:36:24.201294       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1227 09:36:24.232930       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0"
	I1227 09:36:24.233021       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 09:36:24.235510       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1227 09:36:24.235573       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 09:36:24.235586       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1227 09:36:24.235522       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1227 09:36:24.336532       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 27 09:36:36 embed-certs-912564 kubelet[737]: E1227 09:36:36.194228     737 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-embed-certs-912564" containerName="kube-scheduler"
	Dec 27 09:36:37 embed-certs-912564 kubelet[737]: E1227 09:36:37.129461     737 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-embed-certs-912564" containerName="kube-scheduler"
	Dec 27 09:36:40 embed-certs-912564 kubelet[737]: E1227 09:36:40.460398     737 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-embed-certs-912564" containerName="kube-apiserver"
	Dec 27 09:36:41 embed-certs-912564 kubelet[737]: E1227 09:36:41.137550     737 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-embed-certs-912564" containerName="kube-apiserver"
	Dec 27 09:36:43 embed-certs-912564 kubelet[737]: E1227 09:36:43.065512     737 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-qwqqw" containerName="dashboard-metrics-scraper"
	Dec 27 09:36:43 embed-certs-912564 kubelet[737]: I1227 09:36:43.065550     737 scope.go:122] "RemoveContainer" containerID="7d2501ca0199b51ac747ff96467ef5de9e812a54c69349c92b62ad93f34bd323"
	Dec 27 09:36:43 embed-certs-912564 kubelet[737]: I1227 09:36:43.144087     737 scope.go:122] "RemoveContainer" containerID="7d2501ca0199b51ac747ff96467ef5de9e812a54c69349c92b62ad93f34bd323"
	Dec 27 09:36:43 embed-certs-912564 kubelet[737]: E1227 09:36:43.144252     737 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-qwqqw" containerName="dashboard-metrics-scraper"
	Dec 27 09:36:43 embed-certs-912564 kubelet[737]: I1227 09:36:43.144288     737 scope.go:122] "RemoveContainer" containerID="743dc93133ddf9acbbb349b6b566d4ac9c36bcf100eb12449773583fa5e5ab76"
	Dec 27 09:36:43 embed-certs-912564 kubelet[737]: E1227 09:36:43.144441     737 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-qwqqw_kubernetes-dashboard(8704519b-843b-439f-8f79-4db6cfb2c73a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-qwqqw" podUID="8704519b-843b-439f-8f79-4db6cfb2c73a"
	Dec 27 09:36:44 embed-certs-912564 kubelet[737]: E1227 09:36:44.149065     737 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-qwqqw" containerName="dashboard-metrics-scraper"
	Dec 27 09:36:44 embed-certs-912564 kubelet[737]: I1227 09:36:44.149104     737 scope.go:122] "RemoveContainer" containerID="743dc93133ddf9acbbb349b6b566d4ac9c36bcf100eb12449773583fa5e5ab76"
	Dec 27 09:36:44 embed-certs-912564 kubelet[737]: E1227 09:36:44.149279     737 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-qwqqw_kubernetes-dashboard(8704519b-843b-439f-8f79-4db6cfb2c73a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-qwqqw" podUID="8704519b-843b-439f-8f79-4db6cfb2c73a"
	Dec 27 09:36:56 embed-certs-912564 kubelet[737]: I1227 09:36:56.184950     737 scope.go:122] "RemoveContainer" containerID="e321884e2b0761fec8e2206091da271f0e89b9140101ad1d66d55d4f2d049606"
	Dec 27 09:36:57 embed-certs-912564 kubelet[737]: E1227 09:36:57.946295     737 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-vm5hp" containerName="coredns"
	Dec 27 09:37:12 embed-certs-912564 kubelet[737]: E1227 09:37:12.065706     737 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-qwqqw" containerName="dashboard-metrics-scraper"
	Dec 27 09:37:12 embed-certs-912564 kubelet[737]: I1227 09:37:12.065748     737 scope.go:122] "RemoveContainer" containerID="743dc93133ddf9acbbb349b6b566d4ac9c36bcf100eb12449773583fa5e5ab76"
	Dec 27 09:37:12 embed-certs-912564 kubelet[737]: I1227 09:37:12.236696     737 scope.go:122] "RemoveContainer" containerID="743dc93133ddf9acbbb349b6b566d4ac9c36bcf100eb12449773583fa5e5ab76"
	Dec 27 09:37:12 embed-certs-912564 kubelet[737]: E1227 09:37:12.236949     737 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-qwqqw" containerName="dashboard-metrics-scraper"
	Dec 27 09:37:12 embed-certs-912564 kubelet[737]: I1227 09:37:12.236982     737 scope.go:122] "RemoveContainer" containerID="6e0f67ae51171b32108d2707ab568fce0ed3dd12409f5f0aaedd5e6e725a0f8a"
	Dec 27 09:37:12 embed-certs-912564 kubelet[737]: E1227 09:37:12.237176     737 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-qwqqw_kubernetes-dashboard(8704519b-843b-439f-8f79-4db6cfb2c73a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-qwqqw" podUID="8704519b-843b-439f-8f79-4db6cfb2c73a"
	Dec 27 09:37:12 embed-certs-912564 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 27 09:37:13 embed-certs-912564 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 27 09:37:13 embed-certs-912564 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 27 09:37:13 embed-certs-912564 systemd[1]: kubelet.service: Consumed 1.691s CPU time.
	
	
	==> kubernetes-dashboard [6014177c8a2040f77b17c42e8d6d1005c64bd49b4525e35b0d0a748ac43eeb31] <==
	2025/12/27 09:36:34 Starting overwatch
	2025/12/27 09:36:34 Using namespace: kubernetes-dashboard
	2025/12/27 09:36:34 Using in-cluster config to connect to apiserver
	2025/12/27 09:36:34 Using secret token for csrf signing
	2025/12/27 09:36:34 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/27 09:36:34 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/27 09:36:34 Successful initial request to the apiserver, version: v1.35.0
	2025/12/27 09:36:34 Generating JWE encryption key
	2025/12/27 09:36:34 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/27 09:36:34 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/27 09:36:34 Initializing JWE encryption key from synchronized object
	2025/12/27 09:36:34 Creating in-cluster Sidecar client
	2025/12/27 09:36:34 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/27 09:36:34 Serving insecurely on HTTP port: 9090
	2025/12/27 09:37:04 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [84a07cadb1d9c9aa419b2726218cd527981d9603da3c51e418fea57634d1077b] <==
	I1227 09:36:56.272550       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1227 09:36:56.285677       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1227 09:36:56.285845       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1227 09:36:56.291121       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 09:36:59.749910       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 09:37:04.012536       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 09:37:07.610674       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 09:37:10.665351       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 09:37:13.692850       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 09:37:13.701196       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1227 09:37:13.701435       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1227 09:37:13.701691       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-912564_a5da21ff-5449-4f5c-ab2d-073e7576eb10!
	I1227 09:37:13.701895       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"82a4b092-3ec4-4d7c-8528-91199d1bbfdd", APIVersion:"v1", ResourceVersion:"652", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-912564_a5da21ff-5449-4f5c-ab2d-073e7576eb10 became leader
	W1227 09:37:13.708002       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 09:37:13.715506       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1227 09:37:13.802737       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-912564_a5da21ff-5449-4f5c-ab2d-073e7576eb10!
	W1227 09:37:15.720491       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 09:37:15.726937       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 09:37:17.730655       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 09:37:17.734432       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [e321884e2b0761fec8e2206091da271f0e89b9140101ad1d66d55d4f2d049606] <==
	I1227 09:36:25.431158       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1227 09:36:55.433605       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
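
Note on the embed-certs log above: the first storage-provisioner container (e321884e2b07) died with a fatal 30s i/o timeout dialing https://10.96.0.1:443, the in-cluster apiserver Service VIP, while its replacement (84a07cadb1d9) connected and acquired the kube-system/k8s.io-minikube-hostpath lease normally. That pattern suggests a transient window during the node restart rather than a persistent networking fault. A minimal sketch for probing the same path by hand, assuming the profile is still running and curl is available in the node image (both assumptions, not part of the recorded run):

	kubectl --context embed-certs-912564 get --raw /version                       # apiserver via the host kubeconfig
	minikube -p embed-certs-912564 ssh -- curl -sk https://10.96.0.1:443/version  # the Service VIP the provisioner dials
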
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-912564 -n embed-certs-912564
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-912564 -n embed-certs-912564: exit status 2 (326.227018ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context embed-certs-912564 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (7.59s)
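
This Pause failure lines up with the kubelet log above: kubelet.service was stopped at 09:37:13, yet the post-mortem status probe returned exit status 2 while still printing Running for the apiserver. The harness's two checks can be replayed by hand with the binary and profile names from this report (a sketch, not part of the recorded run):

	out/minikube-linux-amd64 pause -p embed-certs-912564 --alsologtostderr -v=1
	out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-912564 -n embed-certs-912564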

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.63s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-246956 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-246956 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (294.739657ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T09:37:14Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-246956 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
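
The stderr above shows the enable step never reached metrics-server itself: MK_ADDON_ENABLE_PAUSED comes from minikube's pre-flight paused-state check, which runs sudo runc list -f json on the node and treats any non-zero exit as a failure, and here runc aborted because its default state directory /run/runc did not exist on the freshly created crio node. A minimal sketch for confirming this by hand (the first two commands are taken from the stderr and the test harness; whether an alternative runtime root such as /run/crun exists instead is an assumption to verify, not a fact from this log):

	minikube -p newest-cni-246956 ssh -- sudo runc list -f json   # replays the failing check
	minikube -p newest-cni-246956 ssh -- sudo crictl ps -a        # what the CRI runtime itself reports
	minikube -p newest-cni-246956 ssh -- ls /run                  # is there a runc (or crun) state dir at all?
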
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-246956
helpers_test.go:244: (dbg) docker inspect newest-cni-246956:

-- stdout --
	[
	    {
	        "Id": "69aebd25b47b0182c49b994a470b77a9e5869eea562f8bba9c36596e6b80b45b",
	        "Created": "2025-12-27T09:36:55.867553755Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 631289,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-27T09:36:55.913961779Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c444b5e11df9d6bc496256b0e11f4e11deb33a885211ded2c3a4667df55bcb6b",
	        "ResolvConfPath": "/var/lib/docker/containers/69aebd25b47b0182c49b994a470b77a9e5869eea562f8bba9c36596e6b80b45b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/69aebd25b47b0182c49b994a470b77a9e5869eea562f8bba9c36596e6b80b45b/hostname",
	        "HostsPath": "/var/lib/docker/containers/69aebd25b47b0182c49b994a470b77a9e5869eea562f8bba9c36596e6b80b45b/hosts",
	        "LogPath": "/var/lib/docker/containers/69aebd25b47b0182c49b994a470b77a9e5869eea562f8bba9c36596e6b80b45b/69aebd25b47b0182c49b994a470b77a9e5869eea562f8bba9c36596e6b80b45b-json.log",
	        "Name": "/newest-cni-246956",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-246956:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-246956",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "69aebd25b47b0182c49b994a470b77a9e5869eea562f8bba9c36596e6b80b45b",
	                "LowerDir": "/var/lib/docker/overlay2/e04614f1bb8ab7149b79d93791f2b8a450769fb1c2807a94e1bc8526dafac32d-init/diff:/var/lib/docker/overlay2/8c7e7d4b0a6e752d3b033998593f18412fe880cdae3fca36ce4655d5d5dd6a34/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e04614f1bb8ab7149b79d93791f2b8a450769fb1c2807a94e1bc8526dafac32d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e04614f1bb8ab7149b79d93791f2b8a450769fb1c2807a94e1bc8526dafac32d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e04614f1bb8ab7149b79d93791f2b8a450769fb1c2807a94e1bc8526dafac32d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-246956",
	                "Source": "/var/lib/docker/volumes/newest-cni-246956/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-246956",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-246956",
	                "name.minikube.sigs.k8s.io": "newest-cni-246956",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "8a5d97091bf19b38ee04036b466bfe0ca58453d963145fa033cb5e31a360662f",
	            "SandboxKey": "/var/run/docker/netns/8a5d97091bf1",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33463"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33464"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33467"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33465"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33466"
	                    }
	                ]
	            },
	            "Networks": {
	                "newest-cni-246956": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "cdd553080fb08dffc74b490be6da4ecef89ab2e05674ec3b26e123185e152840",
	                    "EndpointID": "41656faf69f939a45b3cbc0bf5f41f12c1fcdf81c4b183b81f5f41c845e47fca",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "6e:83:78:68:e5:cc",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-246956",
	                        "69aebd25b47b"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
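
Nothing in this inspect output implicates the Docker layer: the kic container is Running under the host's runc, and /run is a tmpfs, so /run/runc inside the node is wiped on every boot and only appears once the container runtime creates state there, consistent with the "open /run/runc: no such file or directory" error above. The two relevant fields can be pulled directly with docker's --format templating (a sketch):

	docker inspect newest-cni-246956 --format '{{.State.Status}} {{.HostConfig.Runtime}}'
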
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-246956 -n newest-cni-246956
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-246956 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p newest-cni-246956 logs -n 25: (1.248412702s)
helpers_test.go:261: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p no-preload-963457 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                       │ no-preload-963457            │ jenkins │ v1.37.0 │ 27 Dec 25 09:35 UTC │ 27 Dec 25 09:36 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-094398 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-094398       │ jenkins │ v1.37.0 │ 27 Dec 25 09:35 UTC │ 27 Dec 25 09:35 UTC │
	│ start   │ -p old-k8s-version-094398 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-094398       │ jenkins │ v1.37.0 │ 27 Dec 25 09:35 UTC │ 27 Dec 25 09:36 UTC │
	│ delete  │ -p stopped-upgrade-196124                                                                                                                                                                                                                     │ stopped-upgrade-196124       │ jenkins │ v1.37.0 │ 27 Dec 25 09:35 UTC │ 27 Dec 25 09:35 UTC │
	│ start   │ -p default-k8s-diff-port-497722 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-497722 │ jenkins │ v1.37.0 │ 27 Dec 25 09:35 UTC │ 27 Dec 25 09:36 UTC │
	│ addons  │ enable metrics-server -p embed-certs-912564 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-912564           │ jenkins │ v1.37.0 │ 27 Dec 25 09:35 UTC │                     │
	│ stop    │ -p embed-certs-912564 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-912564           │ jenkins │ v1.37.0 │ 27 Dec 25 09:35 UTC │ 27 Dec 25 09:36 UTC │
	│ addons  │ enable dashboard -p embed-certs-912564 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-912564           │ jenkins │ v1.37.0 │ 27 Dec 25 09:36 UTC │ 27 Dec 25 09:36 UTC │
	│ start   │ -p embed-certs-912564 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                        │ embed-certs-912564           │ jenkins │ v1.37.0 │ 27 Dec 25 09:36 UTC │ 27 Dec 25 09:36 UTC │
	│ addons  │ enable metrics-server -p no-preload-963457 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-963457            │ jenkins │ v1.37.0 │ 27 Dec 25 09:36 UTC │                     │
	│ stop    │ -p no-preload-963457 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-963457            │ jenkins │ v1.37.0 │ 27 Dec 25 09:36 UTC │ 27 Dec 25 09:36 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-497722 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-497722 │ jenkins │ v1.37.0 │ 27 Dec 25 09:36 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-497722 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-497722 │ jenkins │ v1.37.0 │ 27 Dec 25 09:36 UTC │ 27 Dec 25 09:36 UTC │
	│ image   │ old-k8s-version-094398 image list --format=json                                                                                                                                                                                               │ old-k8s-version-094398       │ jenkins │ v1.37.0 │ 27 Dec 25 09:36 UTC │ 27 Dec 25 09:36 UTC │
	│ pause   │ -p old-k8s-version-094398 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-094398       │ jenkins │ v1.37.0 │ 27 Dec 25 09:36 UTC │                     │
	│ delete  │ -p old-k8s-version-094398                                                                                                                                                                                                                     │ old-k8s-version-094398       │ jenkins │ v1.37.0 │ 27 Dec 25 09:36 UTC │ 27 Dec 25 09:36 UTC │
	│ addons  │ enable dashboard -p no-preload-963457 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-963457            │ jenkins │ v1.37.0 │ 27 Dec 25 09:36 UTC │ 27 Dec 25 09:36 UTC │
	│ start   │ -p no-preload-963457 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                       │ no-preload-963457            │ jenkins │ v1.37.0 │ 27 Dec 25 09:36 UTC │                     │
	│ delete  │ -p old-k8s-version-094398                                                                                                                                                                                                                     │ old-k8s-version-094398       │ jenkins │ v1.37.0 │ 27 Dec 25 09:36 UTC │ 27 Dec 25 09:36 UTC │
	│ start   │ -p newest-cni-246956 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ newest-cni-246956            │ jenkins │ v1.37.0 │ 27 Dec 25 09:36 UTC │ 27 Dec 25 09:37 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-497722 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-497722 │ jenkins │ v1.37.0 │ 27 Dec 25 09:36 UTC │ 27 Dec 25 09:36 UTC │
	│ start   │ -p default-k8s-diff-port-497722 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-497722 │ jenkins │ v1.37.0 │ 27 Dec 25 09:36 UTC │                     │
	│ image   │ embed-certs-912564 image list --format=json                                                                                                                                                                                                   │ embed-certs-912564           │ jenkins │ v1.37.0 │ 27 Dec 25 09:37 UTC │ 27 Dec 25 09:37 UTC │
	│ pause   │ -p embed-certs-912564 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-912564           │ jenkins │ v1.37.0 │ 27 Dec 25 09:37 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-246956 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-246956            │ jenkins │ v1.37.0 │ 27 Dec 25 09:37 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/27 09:36:56
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1227 09:36:56.118033  631392 out.go:360] Setting OutFile to fd 1 ...
	I1227 09:36:56.118317  631392 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:36:56.118328  631392 out.go:374] Setting ErrFile to fd 2...
	I1227 09:36:56.118332  631392 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:36:56.118604  631392 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22343-373581/.minikube/bin
	I1227 09:36:56.119089  631392 out.go:368] Setting JSON to false
	I1227 09:36:56.120292  631392 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4760,"bootTime":1766823456,"procs":323,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1227 09:36:56.120351  631392 start.go:143] virtualization: kvm guest
	I1227 09:36:56.122005  631392 out.go:179] * [default-k8s-diff-port-497722] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1227 09:36:56.123168  631392 notify.go:221] Checking for updates...
	I1227 09:36:56.123180  631392 out.go:179]   - MINIKUBE_LOCATION=22343
	I1227 09:36:56.124207  631392 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 09:36:56.125641  631392 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22343-373581/kubeconfig
	I1227 09:36:56.126923  631392 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22343-373581/.minikube
	I1227 09:36:56.127972  631392 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1227 09:36:56.129126  631392 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1227 09:36:56.130855  631392 config.go:182] Loaded profile config "default-k8s-diff-port-497722": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:36:56.131603  631392 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 09:36:56.156894  631392 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1227 09:36:56.156995  631392 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 09:36:56.237033  631392 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:72 OomKillDisable:false NGoroutines:89 SystemTime:2025-12-27 09:36:56.225326698 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1227 09:36:56.237183  631392 docker.go:319] overlay module found
	I1227 09:36:56.238784  631392 out.go:179] * Using the docker driver based on existing profile
	I1227 09:36:56.239920  631392 start.go:309] selected driver: docker
	I1227 09:36:56.239938  631392 start.go:928] validating driver "docker" against &{Name:default-k8s-diff-port-497722 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-497722 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 09:36:56.240055  631392 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 09:36:56.240864  631392 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 09:36:56.311407  631392 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:77 SystemTime:2025-12-27 09:36:56.301965993 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1227 09:36:56.311684  631392 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 09:36:56.311714  631392 cni.go:84] Creating CNI manager for ""
	I1227 09:36:56.311779  631392 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 09:36:56.311860  631392 start.go:353] cluster config:
	{Name:default-k8s-diff-port-497722 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-497722 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 09:36:56.313709  631392 out.go:179] * Starting "default-k8s-diff-port-497722" primary control-plane node in "default-k8s-diff-port-497722" cluster
	I1227 09:36:56.314728  631392 cache.go:134] Beginning downloading kic base image for docker with crio
	I1227 09:36:56.319525  631392 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
	I1227 09:36:51.503987  630355 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1227 09:36:51.504270  630355 start.go:159] libmachine.API.Create for "newest-cni-246956" (driver="docker")
	I1227 09:36:51.504305  630355 client.go:173] LocalClient.Create starting
	I1227 09:36:51.504380  630355 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca.pem
	I1227 09:36:51.504418  630355 main.go:144] libmachine: Decoding PEM data...
	I1227 09:36:51.504445  630355 main.go:144] libmachine: Parsing certificate...
	I1227 09:36:51.504530  630355 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22343-373581/.minikube/certs/cert.pem
	I1227 09:36:51.504560  630355 main.go:144] libmachine: Decoding PEM data...
	I1227 09:36:51.504578  630355 main.go:144] libmachine: Parsing certificate...
	I1227 09:36:51.505013  630355 cli_runner.go:164] Run: docker network inspect newest-cni-246956 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1227 09:36:51.521118  630355 cli_runner.go:211] docker network inspect newest-cni-246956 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1227 09:36:51.521200  630355 network_create.go:284] running [docker network inspect newest-cni-246956] to gather additional debugging logs...
	I1227 09:36:51.521226  630355 cli_runner.go:164] Run: docker network inspect newest-cni-246956
	W1227 09:36:51.537389  630355 cli_runner.go:211] docker network inspect newest-cni-246956 returned with exit code 1
	I1227 09:36:51.537414  630355 network_create.go:287] error running [docker network inspect newest-cni-246956]: docker network inspect newest-cni-246956: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-246956 not found
	I1227 09:36:51.537439  630355 network_create.go:289] output of [docker network inspect newest-cni-246956]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-246956 not found
	
	** /stderr **
	I1227 09:36:51.537527  630355 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 09:36:51.553978  630355 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-8c57ecff6d5e IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:1e:54:d8:ad:bd:73} reservation:<nil>}
	I1227 09:36:51.554821  630355 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-21a699476be6 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:fe:e8:d9:95:e6:36} reservation:<nil>}
	I1227 09:36:51.555324  630355 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-8e97c5356905 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:f2:d9:6b:42:f5:e3} reservation:<nil>}
	I1227 09:36:51.556124  630355 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e560a0}
	I1227 09:36:51.556148  630355 network_create.go:124] attempt to create docker network newest-cni-246956 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1227 09:36:51.556202  630355 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-246956 newest-cni-246956
	I1227 09:36:51.601256  630355 network_create.go:108] docker network newest-cni-246956 192.168.76.0/24 created
	I1227 09:36:51.601292  630355 kic.go:121] calculated static IP "192.168.76.2" for the "newest-cni-246956" container
	I1227 09:36:51.601382  630355 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1227 09:36:51.618040  630355 cli_runner.go:164] Run: docker volume create newest-cni-246956 --label name.minikube.sigs.k8s.io=newest-cni-246956 --label created_by.minikube.sigs.k8s.io=true
	I1227 09:36:51.634779  630355 oci.go:103] Successfully created a docker volume newest-cni-246956
	I1227 09:36:51.634906  630355 cli_runner.go:164] Run: docker run --rm --name newest-cni-246956-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-246956 --entrypoint /usr/bin/test -v newest-cni-246956:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -d /var/lib
	I1227 09:36:51.985470  630355 oci.go:107] Successfully prepared a docker volume newest-cni-246956
	I1227 09:36:51.985539  630355 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 09:36:51.985556  630355 kic.go:194] Starting extracting preloaded images to volume ...
	I1227 09:36:51.985607  630355 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22343-373581/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-246956:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -I lz4 -xf /preloaded.tar -C /extractDir
	I1227 09:36:55.783686  630355 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22343-373581/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-246956:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -I lz4 -xf /preloaded.tar -C /extractDir: (3.798014883s)
	I1227 09:36:55.783722  630355 kic.go:203] duration metric: took 3.798163626s to extract preloaded images to volume ...
	W1227 09:36:55.783877  630355 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1227 09:36:55.783911  630355 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1227 09:36:55.783950  630355 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1227 09:36:55.845043  630355 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-246956 --name newest-cni-246956 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-246956 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-246956 --network newest-cni-246956 --ip 192.168.76.2 --volume newest-cni-246956:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a
	I1227 09:36:56.141349  630355 cli_runner.go:164] Run: docker container inspect newest-cni-246956 --format={{.State.Running}}
	I1227 09:36:56.161926  630355 cli_runner.go:164] Run: docker container inspect newest-cni-246956 --format={{.State.Status}}
	I1227 09:36:56.186737  630355 cli_runner.go:164] Run: docker exec newest-cni-246956 stat /var/lib/dpkg/alternatives/iptables
	I1227 09:36:56.251448  630355 oci.go:144] the created container "newest-cni-246956" has a running status.
	I1227 09:36:56.251484  630355 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22343-373581/.minikube/machines/newest-cni-246956/id_rsa...
	I1227 09:36:56.320494  631392 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 09:36:56.320535  631392 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22343-373581/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4
	I1227 09:36:56.320544  631392 cache.go:65] Caching tarball of preloaded images
	I1227 09:36:56.320642  631392 preload.go:251] Found /home/jenkins/minikube-integration/22343-373581/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1227 09:36:56.320635  631392 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1227 09:36:56.320657  631392 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I1227 09:36:56.320859  631392 profile.go:143] Saving config to /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/default-k8s-diff-port-497722/config.json ...
	I1227 09:36:56.345922  631392 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon, skipping pull
	I1227 09:36:56.345947  631392 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in daemon, skipping load
	I1227 09:36:56.345968  631392 cache.go:243] Successfully downloaded all kic artifacts
	I1227 09:36:56.346010  631392 start.go:360] acquireMachinesLock for default-k8s-diff-port-497722: {Name:mk952cc47ec82ed9310014186e6e4270fbb3e58d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 09:36:56.346079  631392 start.go:364] duration metric: took 44.824µs to acquireMachinesLock for "default-k8s-diff-port-497722"
	I1227 09:36:56.346102  631392 start.go:96] Skipping create...Using existing machine configuration
	I1227 09:36:56.346112  631392 fix.go:54] fixHost starting: 
	I1227 09:36:56.346414  631392 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-497722 --format={{.State.Status}}
	I1227 09:36:56.365133  631392 fix.go:112] recreateIfNeeded on default-k8s-diff-port-497722: state=Stopped err=<nil>
	W1227 09:36:56.365221  631392 fix.go:138] unexpected machine state, will restart: <nil>
	I1227 09:36:55.892570  629532 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1227 09:36:55.892596  629532 machine.go:97] duration metric: took 5.581836028s to provisionDockerMachine
	I1227 09:36:55.892610  629532 start.go:293] postStartSetup for "no-preload-963457" (driver="docker")
	I1227 09:36:55.892621  629532 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1227 09:36:55.892671  629532 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1227 09:36:55.892708  629532 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-963457
	I1227 09:36:55.914280  629532 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/no-preload-963457/id_rsa Username:docker}
	I1227 09:36:56.011927  629532 ssh_runner.go:195] Run: cat /etc/os-release
	I1227 09:36:56.015740  629532 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1227 09:36:56.015765  629532 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1227 09:36:56.015778  629532 filesync.go:126] Scanning /home/jenkins/minikube-integration/22343-373581/.minikube/addons for local assets ...
	I1227 09:36:56.015885  629532 filesync.go:126] Scanning /home/jenkins/minikube-integration/22343-373581/.minikube/files for local assets ...
	I1227 09:36:56.015989  629532 filesync.go:149] local asset: /home/jenkins/minikube-integration/22343-373581/.minikube/files/etc/ssl/certs/3771712.pem -> 3771712.pem in /etc/ssl/certs
	I1227 09:36:56.016101  629532 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1227 09:36:56.024943  629532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/files/etc/ssl/certs/3771712.pem --> /etc/ssl/certs/3771712.pem (1708 bytes)
	I1227 09:36:56.046067  629532 start.go:296] duration metric: took 153.444971ms for postStartSetup
	I1227 09:36:56.046157  629532 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 09:36:56.046226  629532 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-963457
	I1227 09:36:56.065042  629532 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/no-preload-963457/id_rsa Username:docker}
	I1227 09:36:56.156611  629532 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1227 09:36:56.161967  629532 fix.go:56] duration metric: took 6.41436493s for fixHost
	I1227 09:36:56.161992  629532 start.go:83] releasing machines lock for "no-preload-963457", held for 6.414414383s
	I1227 09:36:56.162052  629532 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-963457
	I1227 09:36:56.188154  629532 ssh_runner.go:195] Run: cat /version.json
	I1227 09:36:56.188215  629532 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-963457
	I1227 09:36:56.188464  629532 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1227 09:36:56.188765  629532 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-963457
	I1227 09:36:56.223568  629532 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/no-preload-963457/id_rsa Username:docker}
	I1227 09:36:56.225022  629532 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/no-preload-963457/id_rsa Username:docker}
	I1227 09:36:56.390845  629532 ssh_runner.go:195] Run: systemctl --version
	I1227 09:36:56.399342  629532 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1227 09:36:56.448678  629532 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1227 09:36:56.454437  629532 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1227 09:36:56.454505  629532 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1227 09:36:56.464966  629532 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1227 09:36:56.464988  629532 start.go:496] detecting cgroup driver to use...
	I1227 09:36:56.465019  629532 detect.go:190] detected "systemd" cgroup driver on host os
	I1227 09:36:56.465068  629532 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1227 09:36:56.498904  629532 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 09:36:56.522095  629532 docker.go:218] disabling cri-docker service (if available) ...
	I1227 09:36:56.522154  629532 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1227 09:36:56.554225  629532 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1227 09:36:56.572425  629532 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1227 09:36:56.679708  629532 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1227 09:36:56.789709  629532 docker.go:234] disabling docker service ...
	I1227 09:36:56.789778  629532 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1227 09:36:56.806829  629532 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1227 09:36:56.820513  629532 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1227 09:36:56.923496  629532 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1227 09:36:57.030200  629532 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1227 09:36:57.043639  629532 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 09:36:57.058019  629532 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1227 09:36:57.058082  629532 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:36:57.067538  629532 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1227 09:36:57.067598  629532 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:36:57.077318  629532 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:36:57.085917  629532 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:36:57.094193  629532 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1227 09:36:57.101639  629532 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:36:57.110030  629532 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:36:57.117710  629532 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:36:57.126967  629532 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1227 09:36:57.133883  629532 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1227 09:36:57.141132  629532 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 09:36:57.224153  629532 ssh_runner.go:195] Run: sudo systemctl restart crio
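
The sed/tee edits above all rewrite the same drop-in, /etc/crio/crio.conf.d/02-crio.conf, before the daemon-reload and restart. Their net effect on the node can be spot-checked by grepping for the keys the sed expressions target (a sketch, not part of the recorded run):

	sudo grep -nE 'pause_image|cgroup_manager|conmon_cgroup|default_sysctls|ip_unprivileged_port_start' \
	    /etc/crio/crio.conf.d/02-crio.conf
	# expected values after the edits above:
	#   pause_image = "registry.k8s.io/pause:3.10.1"
	#   cgroup_manager = "systemd"
	#   conmon_cgroup = "pod"
	#   default_sysctls = [ "net.ipv4.ip_unprivileged_port_start=0", ... ]
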
	I1227 09:36:57.360012  629532 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1227 09:36:57.360088  629532 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1227 09:36:57.364319  629532 start.go:574] Will wait 60s for crictl version
	I1227 09:36:57.364375  629532 ssh_runner.go:195] Run: which crictl
	I1227 09:36:57.367811  629532 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1227 09:36:57.391321  629532 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1227 09:36:57.391394  629532 ssh_runner.go:195] Run: crio --version
	I1227 09:36:57.421171  629532 ssh_runner.go:195] Run: crio --version
	I1227 09:36:57.452635  629532 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.34.3 ...
	I1227 09:36:57.453610  629532 cli_runner.go:164] Run: docker network inspect no-preload-963457 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 09:36:57.471362  629532 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1227 09:36:57.475352  629532 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 09:36:57.485498  629532 kubeadm.go:884] updating cluster {Name:no-preload-963457 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:no-preload-963457 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1227 09:36:57.485606  629532 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 09:36:57.485644  629532 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 09:36:57.516604  629532 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 09:36:57.516626  629532 cache_images.go:86] Images are preloaded, skipping loading
	I1227 09:36:57.516634  629532 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0 crio true true} ...
	I1227 09:36:57.516744  629532 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-963457 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:no-preload-963457 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1227 09:36:57.516854  629532 ssh_runner.go:195] Run: crio config
	I1227 09:36:57.561627  629532 cni.go:84] Creating CNI manager for ""
	I1227 09:36:57.561649  629532 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 09:36:57.561667  629532 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1227 09:36:57.561699  629532 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-963457 NodeName:no-preload-963457 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1227 09:36:57.561892  629532 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-963457"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
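
The generated kubeadm manifest is written a few lines below as /var/tmp/minikube/kubeadm.yaml.new (2213 bytes). Recent kubeadm releases ship a validate subcommand, so the file could be sanity-checked against the v1beta4 schema before use (a sketch using the binary path from this run; not part of the recorded output):

	sudo /var/lib/minikube/binaries/v1.35.0/kubeadm config validate \
	    --config /var/tmp/minikube/kubeadm.yaml.new
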
	I1227 09:36:57.561977  629532 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1227 09:36:57.570489  629532 binaries.go:51] Found k8s binaries, skipping transfer
	I1227 09:36:57.570544  629532 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1227 09:36:57.579475  629532 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1227 09:36:57.592242  629532 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1227 09:36:57.604718  629532 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
	I1227 09:36:57.617292  629532 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1227 09:36:57.621167  629532 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 09:36:57.631391  629532 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 09:36:57.717314  629532 ssh_runner.go:195] Run: sudo systemctl start kubelet
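
The two "scp memory" transfers above install the kubelet base unit (/lib/systemd/system/kubelet.service) and the kubeadm drop-in (/etc/systemd/system/kubelet.service.d/10-kubeadm.conf) whose [Service] override was printed earlier. The merged unit systemd actually starts can be inspected with (a sketch):

	systemctl cat kubelet                  # base unit plus the 10-kubeadm.conf drop-in
	systemctl show -p DropInPaths kubelet  # lists the drop-in file paths
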
	I1227 09:36:57.743061  629532 certs.go:69] Setting up /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/no-preload-963457 for IP: 192.168.85.2
	I1227 09:36:57.743088  629532 certs.go:195] generating shared ca certs ...
	I1227 09:36:57.743111  629532 certs.go:227] acquiring lock for ca certs: {Name:mkab52f7e7387ca5a3a8b54d27e85c3698ea95d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:36:57.743279  629532 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22343-373581/.minikube/ca.key
	I1227 09:36:57.743330  629532 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22343-373581/.minikube/proxy-client-ca.key
	I1227 09:36:57.743343  629532 certs.go:257] generating profile certs ...
	I1227 09:36:57.743479  629532 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/no-preload-963457/client.key
	I1227 09:36:57.743563  629532 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/no-preload-963457/apiserver.key.7eac886d
	I1227 09:36:57.743621  629532 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/no-preload-963457/proxy-client.key
	I1227 09:36:57.743760  629532 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/377171.pem (1338 bytes)
	W1227 09:36:57.743831  629532 certs.go:480] ignoring /home/jenkins/minikube-integration/22343-373581/.minikube/certs/377171_empty.pem, impossibly tiny 0 bytes
	I1227 09:36:57.743845  629532 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca-key.pem (1679 bytes)
	I1227 09:36:57.743879  629532 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca.pem (1082 bytes)
	I1227 09:36:57.743916  629532 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/cert.pem (1123 bytes)
	I1227 09:36:57.743950  629532 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/key.pem (1679 bytes)
	I1227 09:36:57.744006  629532 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-373581/.minikube/files/etc/ssl/certs/3771712.pem (1708 bytes)
	I1227 09:36:57.744846  629532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1227 09:36:57.763692  629532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1227 09:36:57.782669  629532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1227 09:36:57.803981  629532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1227 09:36:57.828529  629532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/no-preload-963457/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1227 09:36:57.848835  629532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/no-preload-963457/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1227 09:36:57.866897  629532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/no-preload-963457/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1227 09:36:57.883743  629532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/no-preload-963457/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1227 09:36:57.900146  629532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1227 09:36:57.916751  629532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/certs/377171.pem --> /usr/share/ca-certificates/377171.pem (1338 bytes)
	I1227 09:36:57.934086  629532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/files/etc/ssl/certs/3771712.pem --> /usr/share/ca-certificates/3771712.pem (1708 bytes)
	I1227 09:36:57.952366  629532 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1227 09:36:57.966505  629532 ssh_runner.go:195] Run: openssl version
	I1227 09:36:57.975156  629532 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:36:57.984628  629532 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1227 09:36:57.993907  629532 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:36:57.998878  629532 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 09:06 /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:36:57.998931  629532 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:36:58.039453  629532 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1227 09:36:58.046838  629532 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/377171.pem
	I1227 09:36:58.053745  629532 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/377171.pem /etc/ssl/certs/377171.pem
	I1227 09:36:58.060929  629532 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/377171.pem
	I1227 09:36:58.064401  629532 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 27 09:09 /usr/share/ca-certificates/377171.pem
	I1227 09:36:58.064454  629532 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/377171.pem
	I1227 09:36:58.100242  629532 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1227 09:36:58.107476  629532 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3771712.pem
	I1227 09:36:58.114303  629532 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3771712.pem /etc/ssl/certs/3771712.pem
	I1227 09:36:58.122260  629532 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3771712.pem
	I1227 09:36:58.125672  629532 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 27 09:09 /usr/share/ca-certificates/3771712.pem
	I1227 09:36:58.125718  629532 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3771712.pem
	I1227 09:36:58.160416  629532 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
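
Each openssl/ln pair above implements OpenSSL's hashed-directory CA lookup: the CA file is symlinked under /etc/ssl/certs as <subject-hash>.0 (the trailing .0 is a collision counter), which is why the follow-up probes test links named b5213941.0, 51391683.0, and 3ec20f2e.0. Reproduced by hand for the cluster CA (a sketch):

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	echo "$h"                                   # b5213941 in this run
	sudo test -L "/etc/ssl/certs/${h}.0" && echo "hash link present"
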
	I1227 09:36:58.167633  629532 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1227 09:36:58.171634  629532 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1227 09:36:58.211068  629532 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1227 09:36:58.251576  629532 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1227 09:36:58.300366  629532 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1227 09:36:58.353707  629532 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1227 09:36:58.409756  629532 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
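
The six -checkend 86400 probes above ask whether each control-plane certificate will still be valid 24 hours from now; openssl exits 0 if so and 1 if not, presumably deciding whether certificates must be regenerated before the restart. For example (a sketch):

	openssl x509 -noout -checkend 86400 \
	    -in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
	    && echo "valid for at least 24h" || echo "expires within 24h"
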
	I1227 09:36:58.450831  629532 kubeadm.go:401] StartCluster: {Name:no-preload-963457 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:no-preload-963457 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 09:36:58.450953  629532 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 09:36:58.451037  629532 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 09:36:58.486940  629532 cri.go:96] found id: "3e2d56e4ec07d845c73c075df87007dd294313f8f25d93e2e062eae21343461c"
	I1227 09:36:58.487007  629532 cri.go:96] found id: "0edde0ef003566fce4556ceb0a3d7cbc04d2cd6685f5afd803595f3ababb1338"
	I1227 09:36:58.487016  629532 cri.go:96] found id: "03b54f84cfa7e0b506b3122cd323cd0db22c3c4310cfedd9769eeb770ec9a426"
	I1227 09:36:58.487021  629532 cri.go:96] found id: "716a7952d1fa9945a526436df75297cbf883fb889ba62f53b3ae1e94790bfeaa"
	I1227 09:36:58.487067  629532 cri.go:96] found id: ""
	I1227 09:36:58.487122  629532 ssh_runner.go:195] Run: sudo runc list -f json
	W1227 09:36:58.499274  629532 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T09:36:58Z" level=error msg="open /run/runc: no such file or directory"
	I1227 09:36:58.499327  629532 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1227 09:36:58.507652  629532 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1227 09:36:58.507673  629532 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1227 09:36:58.507717  629532 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1227 09:36:58.515112  629532 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1227 09:36:58.515843  629532 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-963457" does not appear in /home/jenkins/minikube-integration/22343-373581/kubeconfig
	I1227 09:36:58.516271  629532 kubeconfig.go:62] /home/jenkins/minikube-integration/22343-373581/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-963457" cluster setting kubeconfig missing "no-preload-963457" context setting]
	I1227 09:36:58.516950  629532 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-373581/kubeconfig: {Name:mkb411689a0952d3f8325ae3ededdb7352d2980c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:36:58.518808  629532 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1227 09:36:58.526405  629532 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1227 09:36:58.526432  629532 kubeadm.go:602] duration metric: took 18.753157ms to restartPrimaryControlPlane
	I1227 09:36:58.526441  629532 kubeadm.go:403] duration metric: took 75.626448ms to StartCluster
	I1227 09:36:58.526457  629532 settings.go:142] acquiring lock: {Name:mkc867fd794159ff1847b43b60548e454c403aa9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:36:58.526521  629532 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22343-373581/kubeconfig
	I1227 09:36:58.527618  629532 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-373581/kubeconfig: {Name:mkb411689a0952d3f8325ae3ededdb7352d2980c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:36:58.527872  629532 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1227 09:36:58.527997  629532 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1227 09:36:58.528107  629532 addons.go:70] Setting storage-provisioner=true in profile "no-preload-963457"
	I1227 09:36:58.528134  629532 addons.go:239] Setting addon storage-provisioner=true in "no-preload-963457"
	I1227 09:36:58.528133  629532 config.go:182] Loaded profile config "no-preload-963457": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	W1227 09:36:58.528143  629532 addons.go:248] addon storage-provisioner should already be in state true
	I1227 09:36:58.528150  629532 addons.go:70] Setting dashboard=true in profile "no-preload-963457"
	I1227 09:36:58.528157  629532 addons.go:70] Setting default-storageclass=true in profile "no-preload-963457"
	I1227 09:36:58.528178  629532 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-963457"
	I1227 09:36:58.528184  629532 addons.go:239] Setting addon dashboard=true in "no-preload-963457"
	W1227 09:36:58.528193  629532 addons.go:248] addon dashboard should already be in state true
	I1227 09:36:58.528196  629532 host.go:66] Checking if "no-preload-963457" exists ...
	I1227 09:36:58.528219  629532 host.go:66] Checking if "no-preload-963457" exists ...
	I1227 09:36:58.528519  629532 cli_runner.go:164] Run: docker container inspect no-preload-963457 --format={{.State.Status}}
	I1227 09:36:58.528685  629532 cli_runner.go:164] Run: docker container inspect no-preload-963457 --format={{.State.Status}}
	I1227 09:36:58.528697  629532 cli_runner.go:164] Run: docker container inspect no-preload-963457 --format={{.State.Status}}
	I1227 09:36:58.529707  629532 out.go:179] * Verifying Kubernetes components...
	I1227 09:36:58.530836  629532 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 09:36:58.557409  629532 addons.go:239] Setting addon default-storageclass=true in "no-preload-963457"
	W1227 09:36:58.557440  629532 addons.go:248] addon default-storageclass should already be in state true
	I1227 09:36:58.557472  629532 host.go:66] Checking if "no-preload-963457" exists ...
	I1227 09:36:58.558777  629532 cli_runner.go:164] Run: docker container inspect no-preload-963457 --format={{.State.Status}}
	I1227 09:36:58.558787  629532 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1227 09:36:58.559492  629532 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1227 09:36:58.562458  629532 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1227 09:36:56.489729  630355 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22343-373581/.minikube/machines/newest-cni-246956/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1227 09:36:56.526840  630355 cli_runner.go:164] Run: docker container inspect newest-cni-246956 --format={{.State.Status}}
	I1227 09:36:56.552039  630355 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1227 09:36:56.552068  630355 kic_runner.go:114] Args: [docker exec --privileged newest-cni-246956 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1227 09:36:56.617818  630355 cli_runner.go:164] Run: docker container inspect newest-cni-246956 --format={{.State.Status}}
	I1227 09:36:56.638019  630355 machine.go:94] provisionDockerMachine start ...
	I1227 09:36:56.638109  630355 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-246956
	I1227 09:36:56.659481  630355 main.go:144] libmachine: Using SSH client type: native
	I1227 09:36:56.659711  630355 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33463 <nil> <nil>}
	I1227 09:36:56.659723  630355 main.go:144] libmachine: About to run SSH command:
	hostname
	I1227 09:36:56.792984  630355 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-246956
	
	I1227 09:36:56.793019  630355 ubuntu.go:182] provisioning hostname "newest-cni-246956"
	I1227 09:36:56.793088  630355 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-246956
	I1227 09:36:56.815143  630355 main.go:144] libmachine: Using SSH client type: native
	I1227 09:36:56.815483  630355 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33463 <nil> <nil>}
	I1227 09:36:56.815506  630355 main.go:144] libmachine: About to run SSH command:
	sudo hostname newest-cni-246956 && echo "newest-cni-246956" | sudo tee /etc/hostname
	I1227 09:36:56.968737  630355 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-246956
	
	I1227 09:36:56.968893  630355 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-246956
	I1227 09:36:56.992239  630355 main.go:144] libmachine: Using SSH client type: native
	I1227 09:36:56.992470  630355 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33463 <nil> <nil>}
	I1227 09:36:56.992489  630355 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-246956' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-246956/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-246956' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1227 09:36:57.122046  630355 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1227 09:36:57.122079  630355 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22343-373581/.minikube CaCertPath:/home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22343-373581/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22343-373581/.minikube}
	I1227 09:36:57.122127  630355 ubuntu.go:190] setting up certificates
	I1227 09:36:57.122138  630355 provision.go:84] configureAuth start
	I1227 09:36:57.122216  630355 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-246956
	I1227 09:36:57.142307  630355 provision.go:143] copyHostCerts
	I1227 09:36:57.142360  630355 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-373581/.minikube/ca.pem, removing ...
	I1227 09:36:57.142370  630355 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-373581/.minikube/ca.pem
	I1227 09:36:57.142423  630355 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22343-373581/.minikube/ca.pem (1082 bytes)
	I1227 09:36:57.142512  630355 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-373581/.minikube/cert.pem, removing ...
	I1227 09:36:57.142521  630355 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-373581/.minikube/cert.pem
	I1227 09:36:57.142546  630355 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22343-373581/.minikube/cert.pem (1123 bytes)
	I1227 09:36:57.142616  630355 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-373581/.minikube/key.pem, removing ...
	I1227 09:36:57.142623  630355 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-373581/.minikube/key.pem
	I1227 09:36:57.142648  630355 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22343-373581/.minikube/key.pem (1679 bytes)
	I1227 09:36:57.142706  630355 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22343-373581/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca-key.pem org=jenkins.newest-cni-246956 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-246956]
	I1227 09:36:57.212931  630355 provision.go:177] copyRemoteCerts
	I1227 09:36:57.212987  630355 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1227 09:36:57.213033  630355 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-246956
	I1227 09:36:57.230924  630355 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/newest-cni-246956/id_rsa Username:docker}
	I1227 09:36:57.325527  630355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1227 09:36:57.343993  630355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1227 09:36:57.361059  630355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1227 09:36:57.378461  630355 provision.go:87] duration metric: took 256.298706ms to configureAuth
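
The server certificate generated at provision.go:117 above carries the fixed SAN set logged there ([127.0.0.1 192.168.76.2 localhost minikube newest-cni-246956]) and is copied to /etc/docker/server.pem. The installed SANs can be read back on the node with (a sketch):

	sudo openssl x509 -noout -text -in /etc/docker/server.pem \
	    | grep -A1 'Subject Alternative Name'
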
	I1227 09:36:57.378484  630355 ubuntu.go:206] setting minikube options for container-runtime
	I1227 09:36:57.378677  630355 config.go:182] Loaded profile config "newest-cni-246956": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:36:57.378826  630355 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-246956
	I1227 09:36:57.397931  630355 main.go:144] libmachine: Using SSH client type: native
	I1227 09:36:57.398243  630355 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33463 <nil> <nil>}
	I1227 09:36:57.398266  630355 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1227 09:36:57.667097  630355 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1227 09:36:57.667129  630355 machine.go:97] duration metric: took 1.02908483s to provisionDockerMachine
	I1227 09:36:57.667142  630355 client.go:176] duration metric: took 6.162825502s to LocalClient.Create
	I1227 09:36:57.667182  630355 start.go:167] duration metric: took 6.162896704s to libmachine.API.Create "newest-cni-246956"
	I1227 09:36:57.667192  630355 start.go:293] postStartSetup for "newest-cni-246956" (driver="docker")
	I1227 09:36:57.667204  630355 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1227 09:36:57.667353  630355 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1227 09:36:57.667440  630355 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-246956
	I1227 09:36:57.688032  630355 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/newest-cni-246956/id_rsa Username:docker}
	I1227 09:36:57.781111  630355 ssh_runner.go:195] Run: cat /etc/os-release
	I1227 09:36:57.785094  630355 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1227 09:36:57.785137  630355 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1227 09:36:57.785152  630355 filesync.go:126] Scanning /home/jenkins/minikube-integration/22343-373581/.minikube/addons for local assets ...
	I1227 09:36:57.785207  630355 filesync.go:126] Scanning /home/jenkins/minikube-integration/22343-373581/.minikube/files for local assets ...
	I1227 09:36:57.785305  630355 filesync.go:149] local asset: /home/jenkins/minikube-integration/22343-373581/.minikube/files/etc/ssl/certs/3771712.pem -> 3771712.pem in /etc/ssl/certs
	I1227 09:36:57.785438  630355 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1227 09:36:57.793222  630355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/files/etc/ssl/certs/3771712.pem --> /etc/ssl/certs/3771712.pem (1708 bytes)
	I1227 09:36:57.817083  630355 start.go:296] duration metric: took 149.877387ms for postStartSetup
	I1227 09:36:57.817500  630355 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-246956
	I1227 09:36:57.842720  630355 profile.go:143] Saving config to /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/newest-cni-246956/config.json ...
	I1227 09:36:57.842997  630355 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 09:36:57.843039  630355 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-246956
	I1227 09:36:57.861694  630355 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/newest-cni-246956/id_rsa Username:docker}
	I1227 09:36:57.950266  630355 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1227 09:36:57.955115  630355 start.go:128] duration metric: took 6.45260447s to createHost
	I1227 09:36:57.955139  630355 start.go:83] releasing machines lock for "newest-cni-246956", held for 6.452757416s
	I1227 09:36:57.955207  630355 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-246956
	I1227 09:36:57.976745  630355 ssh_runner.go:195] Run: cat /version.json
	I1227 09:36:57.976812  630355 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1227 09:36:57.976893  630355 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-246956
	I1227 09:36:57.976938  630355 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-246956
	I1227 09:36:57.996141  630355 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/newest-cni-246956/id_rsa Username:docker}
	I1227 09:36:57.997139  630355 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/newest-cni-246956/id_rsa Username:docker}
	I1227 09:36:58.139611  630355 ssh_runner.go:195] Run: systemctl --version
	I1227 09:36:58.145675  630355 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1227 09:36:58.181051  630355 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1227 09:36:58.185484  630355 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1227 09:36:58.185559  630355 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1227 09:36:58.210594  630355 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1227 09:36:58.210618  630355 start.go:496] detecting cgroup driver to use...
	I1227 09:36:58.210653  630355 detect.go:190] detected "systemd" cgroup driver on host os
	I1227 09:36:58.210713  630355 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1227 09:36:58.227384  630355 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 09:36:58.238872  630355 docker.go:218] disabling cri-docker service (if available) ...
	I1227 09:36:58.238929  630355 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1227 09:36:58.260938  630355 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1227 09:36:58.283057  630355 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1227 09:36:58.414499  630355 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1227 09:36:58.538586  630355 docker.go:234] disabling docker service ...
	I1227 09:36:58.538673  630355 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1227 09:36:58.586101  630355 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1227 09:36:58.605375  630355 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1227 09:36:58.705180  630355 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1227 09:36:58.819410  630355 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1227 09:36:58.832661  630355 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 09:36:58.850388  630355 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1227 09:36:58.850452  630355 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:36:58.862728  630355 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1227 09:36:58.862856  630355 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:36:58.873915  630355 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:36:58.883825  630355 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:36:58.894396  630355 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1227 09:36:58.903928  630355 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:36:58.915832  630355 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:36:58.931065  630355 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:36:58.941511  630355 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1227 09:36:58.950306  630355 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1227 09:36:58.957971  630355 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 09:36:59.055197  630355 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1227 09:36:59.200763  630355 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1227 09:36:59.200870  630355 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1227 09:36:59.205226  630355 start.go:574] Will wait 60s for crictl version
	I1227 09:36:59.205294  630355 ssh_runner.go:195] Run: which crictl
	I1227 09:36:59.209253  630355 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1227 09:36:59.235124  630355 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1227 09:36:59.235211  630355 ssh_runner.go:195] Run: crio --version
	I1227 09:36:59.266439  630355 ssh_runner.go:195] Run: crio --version
	I1227 09:36:59.304407  630355 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.34.3 ...
	I1227 09:36:59.305344  630355 cli_runner.go:164] Run: docker network inspect newest-cni-246956 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 09:36:59.325345  630355 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1227 09:36:59.329616  630355 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 09:36:59.343894  630355 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1227 09:36:58.562497  629532 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1227 09:36:58.562512  629532 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1227 09:36:58.562565  629532 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-963457
	I1227 09:36:58.563435  629532 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1227 09:36:58.563457  629532 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1227 09:36:58.563515  629532 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-963457
	I1227 09:36:58.586081  629532 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1227 09:36:58.586106  629532 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1227 09:36:58.586165  629532 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-963457
	I1227 09:36:58.597636  629532 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/no-preload-963457/id_rsa Username:docker}
	I1227 09:36:58.600166  629532 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/no-preload-963457/id_rsa Username:docker}
	I1227 09:36:58.615813  629532 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/no-preload-963457/id_rsa Username:docker}
	I1227 09:36:58.683845  629532 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 09:36:58.696212  629532 node_ready.go:35] waiting up to 6m0s for node "no-preload-963457" to be "Ready" ...
	I1227 09:36:58.716769  629532 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1227 09:36:58.717072  629532 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1227 09:36:58.717091  629532 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1227 09:36:58.722474  629532 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1227 09:36:58.732902  629532 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1227 09:36:58.732921  629532 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1227 09:36:58.756653  629532 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1227 09:36:58.756700  629532 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1227 09:36:58.775246  629532 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1227 09:36:58.775273  629532 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1227 09:36:58.791178  629532 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1227 09:36:58.791220  629532 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1227 09:36:58.806784  629532 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1227 09:36:58.806865  629532 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1227 09:36:58.821301  629532 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1227 09:36:58.821323  629532 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1227 09:36:58.835038  629532 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1227 09:36:58.835059  629532 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1227 09:36:58.851360  629532 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1227 09:36:58.851383  629532 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1227 09:36:58.866009  629532 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
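
With the ten dashboard manifests applied in a single kubectl invocation, the addon's rollout could then be followed in its namespace (a sketch; the kubernetes-dashboard namespace is an assumption, created by dashboard-ns.yaml but not named in this log):

	sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	    /var/lib/minikube/binaries/v1.35.0/kubectl -n kubernetes-dashboard get deploy,pods
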
	W1227 09:36:56.752824  622335 pod_ready.go:104] pod "coredns-7d764666f9-vm5hp" is not "Ready", error: <nil>
	I1227 09:36:58.252524  622335 pod_ready.go:94] pod "coredns-7d764666f9-vm5hp" is "Ready"
	I1227 09:36:58.252556  622335 pod_ready.go:86] duration metric: took 32.507379919s for pod "coredns-7d764666f9-vm5hp" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:36:58.255267  622335 pod_ready.go:83] waiting for pod "etcd-embed-certs-912564" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:36:58.260388  622335 pod_ready.go:94] pod "etcd-embed-certs-912564" is "Ready"
	I1227 09:36:58.260428  622335 pod_ready.go:86] duration metric: took 5.133413ms for pod "etcd-embed-certs-912564" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:36:58.263042  622335 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-912564" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:36:58.269170  622335 pod_ready.go:94] pod "kube-apiserver-embed-certs-912564" is "Ready"
	I1227 09:36:58.269195  622335 pod_ready.go:86] duration metric: took 6.12908ms for pod "kube-apiserver-embed-certs-912564" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:36:58.271334  622335 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-912564" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:36:58.451056  622335 pod_ready.go:94] pod "kube-controller-manager-embed-certs-912564" is "Ready"
	I1227 09:36:58.451082  622335 pod_ready.go:86] duration metric: took 179.728256ms for pod "kube-controller-manager-embed-certs-912564" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:36:58.650121  622335 pod_ready.go:83] waiting for pod "kube-proxy-dv8ch" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:36:59.050144  622335 pod_ready.go:94] pod "kube-proxy-dv8ch" is "Ready"
	I1227 09:36:59.050170  622335 pod_ready.go:86] duration metric: took 400.019705ms for pod "kube-proxy-dv8ch" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:36:59.249302  622335 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-912564" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:36:59.649616  622335 pod_ready.go:94] pod "kube-scheduler-embed-certs-912564" is "Ready"
	I1227 09:36:59.649652  622335 pod_ready.go:86] duration metric: took 400.318884ms for pod "kube-scheduler-embed-certs-912564" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:36:59.649667  622335 pod_ready.go:40] duration metric: took 33.907675392s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1227 09:36:59.704219  622335 start.go:625] kubectl: 1.35.0, cluster: 1.35.0 (minor skew: 0)
	I1227 09:36:59.705864  622335 out.go:179] * Done! kubectl is now configured to use "embed-certs-912564" cluster and "default" namespace by default
	I1227 09:36:59.788339  629532 node_ready.go:49] node "no-preload-963457" is "Ready"
	I1227 09:36:59.788374  629532 node_ready.go:38] duration metric: took 1.092117451s for node "no-preload-963457" to be "Ready" ...
	I1227 09:36:59.788394  629532 api_server.go:52] waiting for apiserver process to appear ...
	I1227 09:36:59.788452  629532 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 09:37:00.485897  629532 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.769088985s)
	I1227 09:37:00.485927  629532 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.763421495s)
	I1227 09:37:00.486068  629532 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.620019639s)
	I1227 09:37:00.486186  629532 api_server.go:72] duration metric: took 1.958263237s to wait for apiserver process to appear ...
	I1227 09:37:00.486206  629532 api_server.go:88] waiting for apiserver healthz status ...
	I1227 09:37:00.486232  629532 api_server.go:299] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1227 09:37:00.488270  629532 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-963457 addons enable metrics-server
	
	I1227 09:37:00.491676  629532 api_server.go:325] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1227 09:37:00.491700  629532 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
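The two 500 responses above are the apiserver's verbose healthz report while the rbac and scheduling bootstrap post-start hooks are still running; minikube simply re-polls the endpoint until it returns 200. The same wait can be reproduced by hand (address and port taken from the log; -k because the apiserver serves a self-signed certificate):

	until [ "$(curl -sk -o /dev/null -w '%{http_code}' https://192.168.85.2:8443/healthz)" = "200" ]; do
	  sleep 1
	done
	curl -sk 'https://192.168.85.2:8443/healthz?verbose'   # per-check [+]/[-] detail
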
	I1227 09:37:00.493717  629532 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1227 09:36:59.344975  630355 kubeadm.go:884] updating cluster {Name:newest-cni-246956 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-246956 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1227 09:36:59.345101  630355 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 09:36:59.345149  630355 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 09:36:59.384759  630355 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 09:36:59.384782  630355 crio.go:433] Images already preloaded, skipping extraction
	I1227 09:36:59.384849  630355 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 09:36:59.410055  630355 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 09:36:59.410078  630355 cache_images.go:86] Images are preloaded, skipping loading
	I1227 09:36:59.410088  630355 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0 crio true true} ...
	I1227 09:36:59.410204  630355 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-246956 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:newest-cni-246956 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
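The empty ExecStart= in the drop-in above is the standard systemd idiom: a list-valued setting must be cleared before an override can replace it, so the blank assignment resets ExecStart and the following line installs the minikube-specific command. To inspect the merged result on the node:

	sudo systemctl cat kubelet   # prints kubelet.service plus the 10-kubeadm.conf drop-in
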
	I1227 09:36:59.410294  630355 ssh_runner.go:195] Run: crio config
	I1227 09:36:59.456322  630355 cni.go:84] Creating CNI manager for ""
	I1227 09:36:59.456350  630355 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 09:36:59.456368  630355 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1227 09:36:59.456397  630355 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-246956 NodeName:newest-cni-246956 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1227 09:36:59.456523  630355 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-246956"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
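This rendered config is what gets staged as /var/tmp/minikube/kubeadm.yaml.new and copied into place further down. As a sketch, a config like this can be sanity-checked without mutating the node via kubeadm's dry-run mode (assuming the file is already at the path used in the log):

	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run
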
	I1227 09:36:59.456584  630355 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1227 09:36:59.466669  630355 binaries.go:51] Found k8s binaries, skipping transfer
	I1227 09:36:59.466742  630355 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1227 09:36:59.475652  630355 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1227 09:36:59.488517  630355 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1227 09:36:59.502920  630355 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2211 bytes)
	I1227 09:36:59.515008  630355 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1227 09:36:59.518524  630355 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 09:36:59.528038  630355 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 09:36:59.624983  630355 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 09:36:59.660589  630355 certs.go:69] Setting up /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/newest-cni-246956 for IP: 192.168.76.2
	I1227 09:36:59.660613  630355 certs.go:195] generating shared ca certs ...
	I1227 09:36:59.660633  630355 certs.go:227] acquiring lock for ca certs: {Name:mkab52f7e7387ca5a3a8b54d27e85c3698ea95d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:36:59.660905  630355 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22343-373581/.minikube/ca.key
	I1227 09:36:59.661015  630355 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22343-373581/.minikube/proxy-client-ca.key
	I1227 09:36:59.661035  630355 certs.go:257] generating profile certs ...
	I1227 09:36:59.661115  630355 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/newest-cni-246956/client.key
	I1227 09:36:59.661143  630355 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/newest-cni-246956/client.crt with IP's: []
	I1227 09:36:59.788963  630355 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/newest-cni-246956/client.crt ...
	I1227 09:36:59.789056  630355 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/newest-cni-246956/client.crt: {Name:mke160e795be5819fc64a4cfdc99d30cbaf7ac78 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:36:59.789341  630355 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/newest-cni-246956/client.key ...
	I1227 09:36:59.789401  630355 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/newest-cni-246956/client.key: {Name:mked296971a2b1adfd827807ea9bcfac542a6198 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:36:59.789603  630355 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/newest-cni-246956/apiserver.key.c99eabfc
	I1227 09:36:59.789628  630355 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/newest-cni-246956/apiserver.crt.c99eabfc with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1227 09:37:00.007987  630355 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/newest-cni-246956/apiserver.crt.c99eabfc ...
	I1227 09:37:00.008015  630355 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/newest-cni-246956/apiserver.crt.c99eabfc: {Name:mkf106bea43ddce33073679b38a2435ae123204d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:37:00.008208  630355 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/newest-cni-246956/apiserver.key.c99eabfc ...
	I1227 09:37:00.008231  630355 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/newest-cni-246956/apiserver.key.c99eabfc: {Name:mk7a7a619839a917a7bc295106055593f103712f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:37:00.008348  630355 certs.go:382] copying /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/newest-cni-246956/apiserver.crt.c99eabfc -> /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/newest-cni-246956/apiserver.crt
	I1227 09:37:00.008443  630355 certs.go:386] copying /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/newest-cni-246956/apiserver.key.c99eabfc -> /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/newest-cni-246956/apiserver.key
	I1227 09:37:00.008507  630355 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/newest-cni-246956/proxy-client.key
	I1227 09:37:00.008521  630355 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/newest-cni-246956/proxy-client.crt with IP's: []
	I1227 09:37:00.065399  630355 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/newest-cni-246956/proxy-client.crt ...
	I1227 09:37:00.065432  630355 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/newest-cni-246956/proxy-client.crt: {Name:mk53170e13da64d8c60c92c2979a2d1722947a2e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:37:00.065641  630355 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/newest-cni-246956/proxy-client.key ...
	I1227 09:37:00.065669  630355 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/newest-cni-246956/proxy-client.key: {Name:mk035fe76dc91ae603b8c29c1b707b2402dd30b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:37:00.065955  630355 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/377171.pem (1338 bytes)
	W1227 09:37:00.066015  630355 certs.go:480] ignoring /home/jenkins/minikube-integration/22343-373581/.minikube/certs/377171_empty.pem, impossibly tiny 0 bytes
	I1227 09:37:00.066033  630355 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca-key.pem (1679 bytes)
	I1227 09:37:00.066073  630355 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca.pem (1082 bytes)
	I1227 09:37:00.066115  630355 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/cert.pem (1123 bytes)
	I1227 09:37:00.066153  630355 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/key.pem (1679 bytes)
	I1227 09:37:00.066212  630355 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-373581/.minikube/files/etc/ssl/certs/3771712.pem (1708 bytes)
	I1227 09:37:00.066935  630355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1227 09:37:00.090536  630355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1227 09:37:00.114426  630355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1227 09:37:00.139622  630355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1227 09:37:00.168025  630355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/newest-cni-246956/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1227 09:37:00.199153  630355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/newest-cni-246956/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1227 09:37:00.230695  630355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/newest-cni-246956/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1227 09:37:00.258683  630355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/newest-cni-246956/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1227 09:37:00.286125  630355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/files/etc/ssl/certs/3771712.pem --> /usr/share/ca-certificates/3771712.pem (1708 bytes)
	I1227 09:37:00.307110  630355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1227 09:37:00.329272  630355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/certs/377171.pem --> /usr/share/ca-certificates/377171.pem (1338 bytes)
	I1227 09:37:00.353282  630355 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1227 09:37:00.370493  630355 ssh_runner.go:195] Run: openssl version
	I1227 09:37:00.378974  630355 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3771712.pem
	I1227 09:37:00.386715  630355 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3771712.pem /etc/ssl/certs/3771712.pem
	I1227 09:37:00.395766  630355 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3771712.pem
	I1227 09:37:00.400298  630355 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 27 09:09 /usr/share/ca-certificates/3771712.pem
	I1227 09:37:00.400359  630355 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3771712.pem
	I1227 09:37:00.438423  630355 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1227 09:37:00.446991  630355 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/3771712.pem /etc/ssl/certs/3ec20f2e.0
	I1227 09:37:00.455583  630355 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:37:00.464747  630355 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1227 09:37:00.475422  630355 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:37:00.480200  630355 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 09:06 /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:37:00.480263  630355 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:37:00.528725  630355 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1227 09:37:00.536448  630355 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1227 09:37:00.543666  630355 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/377171.pem
	I1227 09:37:00.550688  630355 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/377171.pem /etc/ssl/certs/377171.pem
	I1227 09:37:00.557833  630355 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/377171.pem
	I1227 09:37:00.561504  630355 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 27 09:09 /usr/share/ca-certificates/377171.pem
	I1227 09:37:00.561559  630355 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/377171.pem
	I1227 09:37:00.597670  630355 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1227 09:37:00.605630  630355 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/377171.pem /etc/ssl/certs/51391683.0
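The test/link loop above builds the classic OpenSSL trust-store layout: each CA certificate is exposed under /etc/ssl/certs through a symlink named after its subject hash with a .0 suffix (3ec20f2e.0, b5213941.0 and 51391683.0 are exactly those hashes). A simplified by-hand version of one iteration, skipping the intermediate /etc/ssl/certs/<name>.pem link the runner creates:

	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"
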
	I1227 09:37:00.613503  630355 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1227 09:37:00.617430  630355 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1227 09:37:00.617489  630355 kubeadm.go:401] StartCluster: {Name:newest-cni-246956 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-246956 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 09:37:00.617574  630355 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 09:37:00.617633  630355 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 09:37:00.648921  630355 cri.go:96] found id: ""
	I1227 09:37:00.648987  630355 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1227 09:37:00.657821  630355 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1227 09:37:00.665951  630355 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1227 09:37:00.666012  630355 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1227 09:37:00.674118  630355 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1227 09:37:00.674134  630355 kubeadm.go:158] found existing configuration files:
	
	I1227 09:37:00.674176  630355 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1227 09:37:00.681708  630355 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1227 09:37:00.681768  630355 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1227 09:37:00.689291  630355 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1227 09:37:00.697145  630355 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1227 09:37:00.697203  630355 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1227 09:37:00.705396  630355 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1227 09:37:00.714137  630355 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1227 09:37:00.714200  630355 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1227 09:37:00.723361  630355 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1227 09:37:00.732948  630355 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1227 09:37:00.733004  630355 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1227 09:37:00.741856  630355 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1227 09:37:00.788203  630355 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I1227 09:37:00.788305  630355 kubeadm.go:319] [preflight] Running pre-flight checks
	I1227 09:37:00.868108  630355 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1227 09:37:00.868200  630355 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1227 09:37:00.868288  630355 kubeadm.go:319] OS: Linux
	I1227 09:37:00.868365  630355 kubeadm.go:319] CGROUPS_CPU: enabled
	I1227 09:37:00.868440  630355 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1227 09:37:00.868658  630355 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1227 09:37:00.868734  630355 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1227 09:37:00.868816  630355 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1227 09:37:00.868893  630355 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1227 09:37:00.868964  630355 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1227 09:37:00.869016  630355 kubeadm.go:319] CGROUPS_IO: enabled
	I1227 09:37:00.945222  630355 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1227 09:37:00.945395  630355 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1227 09:37:00.945534  630355 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1227 09:37:00.953623  630355 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1227 09:36:56.366926  631392 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-497722" ...
	I1227 09:36:56.367007  631392 cli_runner.go:164] Run: docker start default-k8s-diff-port-497722
	I1227 09:36:56.696501  631392 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-497722 --format={{.State.Status}}
	I1227 09:36:56.724853  631392 kic.go:430] container "default-k8s-diff-port-497722" state is running.
	I1227 09:36:56.725355  631392 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-497722
	I1227 09:36:56.748319  631392 profile.go:143] Saving config to /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/default-k8s-diff-port-497722/config.json ...
	I1227 09:36:56.748736  631392 machine.go:94] provisionDockerMachine start ...
	I1227 09:36:56.748860  631392 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-497722
	I1227 09:36:56.769479  631392 main.go:144] libmachine: Using SSH client type: native
	I1227 09:36:56.769885  631392 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33468 <nil> <nil>}
	I1227 09:36:56.769906  631392 main.go:144] libmachine: About to run SSH command:
	hostname
	I1227 09:36:56.770811  631392 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:56300->127.0.0.1:33468: read: connection reset by peer
	I1227 09:36:59.952704  631392 main.go:144] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-497722
	
	I1227 09:36:59.952731  631392 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-497722"
	I1227 09:36:59.952803  631392 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-497722
	I1227 09:36:59.977726  631392 main.go:144] libmachine: Using SSH client type: native
	I1227 09:36:59.978041  631392 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33468 <nil> <nil>}
	I1227 09:36:59.978072  631392 main.go:144] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-497722 && echo "default-k8s-diff-port-497722" | sudo tee /etc/hostname
	I1227 09:37:00.132462  631392 main.go:144] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-497722
	
	I1227 09:37:00.132551  631392 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-497722
	I1227 09:37:00.162410  631392 main.go:144] libmachine: Using SSH client type: native
	I1227 09:37:00.162741  631392 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33468 <nil> <nil>}
	I1227 09:37:00.162763  631392 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-497722' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-497722/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-497722' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1227 09:37:00.320889  631392 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1227 09:37:00.321032  631392 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22343-373581/.minikube CaCertPath:/home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22343-373581/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22343-373581/.minikube}
	I1227 09:37:00.321059  631392 ubuntu.go:190] setting up certificates
	I1227 09:37:00.321086  631392 provision.go:84] configureAuth start
	I1227 09:37:00.321152  631392 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-497722
	I1227 09:37:00.345021  631392 provision.go:143] copyHostCerts
	I1227 09:37:00.345085  631392 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-373581/.minikube/cert.pem, removing ...
	I1227 09:37:00.345108  631392 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-373581/.minikube/cert.pem
	I1227 09:37:00.345193  631392 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22343-373581/.minikube/cert.pem (1123 bytes)
	I1227 09:37:00.345342  631392 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-373581/.minikube/key.pem, removing ...
	I1227 09:37:00.345358  631392 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-373581/.minikube/key.pem
	I1227 09:37:00.345408  631392 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22343-373581/.minikube/key.pem (1679 bytes)
	I1227 09:37:00.345527  631392 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-373581/.minikube/ca.pem, removing ...
	I1227 09:37:00.345542  631392 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-373581/.minikube/ca.pem
	I1227 09:37:00.345633  631392 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22343-373581/.minikube/ca.pem (1082 bytes)
	I1227 09:37:00.345740  631392 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22343-373581/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-497722 san=[127.0.0.1 192.168.103.2 default-k8s-diff-port-497722 localhost minikube]
	I1227 09:37:00.442570  631392 provision.go:177] copyRemoteCerts
	I1227 09:37:00.442624  631392 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1227 09:37:00.442658  631392 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-497722
	I1227 09:37:00.466058  631392 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33468 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/default-k8s-diff-port-497722/id_rsa Username:docker}
	I1227 09:37:00.564061  631392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1227 09:37:00.581525  631392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1227 09:37:00.598744  631392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1227 09:37:00.617306  631392 provision.go:87] duration metric: took 296.195744ms to configureAuth
	I1227 09:37:00.617333  631392 ubuntu.go:206] setting minikube options for container-runtime
	I1227 09:37:00.617559  631392 config.go:182] Loaded profile config "default-k8s-diff-port-497722": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:37:00.617677  631392 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-497722
	I1227 09:37:00.639063  631392 main.go:144] libmachine: Using SSH client type: native
	I1227 09:37:00.639350  631392 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33468 <nil> <nil>}
	I1227 09:37:00.639374  631392 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1227 09:37:00.992976  631392 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1227 09:37:00.993002  631392 machine.go:97] duration metric: took 4.244244423s to provisionDockerMachine
	I1227 09:37:00.993015  631392 start.go:293] postStartSetup for "default-k8s-diff-port-497722" (driver="docker")
	I1227 09:37:00.993027  631392 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1227 09:37:00.993100  631392 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1227 09:37:00.993147  631392 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-497722
	I1227 09:37:01.014724  631392 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33468 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/default-k8s-diff-port-497722/id_rsa Username:docker}
	I1227 09:37:01.108368  631392 ssh_runner.go:195] Run: cat /etc/os-release
	I1227 09:37:01.111898  631392 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1227 09:37:01.111923  631392 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1227 09:37:01.111934  631392 filesync.go:126] Scanning /home/jenkins/minikube-integration/22343-373581/.minikube/addons for local assets ...
	I1227 09:37:01.111974  631392 filesync.go:126] Scanning /home/jenkins/minikube-integration/22343-373581/.minikube/files for local assets ...
	I1227 09:37:01.112059  631392 filesync.go:149] local asset: /home/jenkins/minikube-integration/22343-373581/.minikube/files/etc/ssl/certs/3771712.pem -> 3771712.pem in /etc/ssl/certs
	I1227 09:37:01.112148  631392 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1227 09:37:00.957070  630355 out.go:252]   - Generating certificates and keys ...
	I1227 09:37:00.957171  630355 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1227 09:37:00.957258  630355 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1227 09:37:01.046705  630355 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1227 09:37:01.170369  630355 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1227 09:37:01.237679  630355 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1227 09:37:01.119477  631392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/files/etc/ssl/certs/3771712.pem --> /etc/ssl/certs/3771712.pem (1708 bytes)
	I1227 09:37:01.136366  631392 start.go:296] duration metric: took 143.321557ms for postStartSetup
	I1227 09:37:01.136451  631392 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 09:37:01.136488  631392 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-497722
	I1227 09:37:01.156365  631392 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33468 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/default-k8s-diff-port-497722/id_rsa Username:docker}
	I1227 09:37:01.245668  631392 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1227 09:37:01.250596  631392 fix.go:56] duration metric: took 4.904478049s for fixHost
	I1227 09:37:01.250623  631392 start.go:83] releasing machines lock for "default-k8s-diff-port-497722", held for 4.904529799s
	I1227 09:37:01.250702  631392 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-497722
	I1227 09:37:01.270598  631392 ssh_runner.go:195] Run: cat /version.json
	I1227 09:37:01.270653  631392 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-497722
	I1227 09:37:01.270708  631392 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1227 09:37:01.270815  631392 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-497722
	I1227 09:37:01.289307  631392 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33468 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/default-k8s-diff-port-497722/id_rsa Username:docker}
	I1227 09:37:01.291072  631392 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33468 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/default-k8s-diff-port-497722/id_rsa Username:docker}
	I1227 09:37:01.376675  631392 ssh_runner.go:195] Run: systemctl --version
	I1227 09:37:01.432189  631392 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1227 09:37:01.470826  631392 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1227 09:37:01.475338  631392 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1227 09:37:01.475421  631392 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1227 09:37:01.483037  631392 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1227 09:37:01.483059  631392 start.go:496] detecting cgroup driver to use...
	I1227 09:37:01.483091  631392 detect.go:190] detected "systemd" cgroup driver on host os
	I1227 09:37:01.483133  631392 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1227 09:37:01.498478  631392 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 09:37:01.511191  631392 docker.go:218] disabling cri-docker service (if available) ...
	I1227 09:37:01.511242  631392 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1227 09:37:01.526955  631392 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1227 09:37:01.540540  631392 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1227 09:37:01.622776  631392 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1227 09:37:01.703107  631392 docker.go:234] disabling docker service ...
	I1227 09:37:01.703196  631392 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1227 09:37:01.717309  631392 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1227 09:37:01.729344  631392 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1227 09:37:01.840554  631392 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1227 09:37:01.935549  631392 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1227 09:37:01.947578  631392 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 09:37:01.961324  631392 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1227 09:37:01.961377  631392 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:37:01.970001  631392 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1227 09:37:01.970098  631392 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:37:01.979093  631392 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:37:01.989166  631392 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:37:01.998100  631392 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1227 09:37:02.007297  631392 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:37:02.017288  631392 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:37:02.025859  631392 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:37:02.035574  631392 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1227 09:37:02.043526  631392 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1227 09:37:02.050579  631392 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 09:37:02.135042  631392 ssh_runner.go:195] Run: sudo systemctl restart crio
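Condensed, the CRI-O reconfiguration the runner just performed is: point crictl at the CRI-O socket, pin the pause image, force the systemd cgroup manager, then restart. All of the commands below are lifted from the log lines above and gathered here only for reference:

	printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo systemctl daemon-reload && sudo systemctl restart crio
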
	I1227 09:37:02.277022  631392 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1227 09:37:02.277093  631392 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1227 09:37:02.280991  631392 start.go:574] Will wait 60s for crictl version
	I1227 09:37:02.281049  631392 ssh_runner.go:195] Run: which crictl
	I1227 09:37:02.284712  631392 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1227 09:37:02.310427  631392 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1227 09:37:02.310505  631392 ssh_runner.go:195] Run: crio --version
	I1227 09:37:02.339457  631392 ssh_runner.go:195] Run: crio --version
	I1227 09:37:02.369305  631392 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.34.3 ...
	I1227 09:37:01.372389  630355 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1227 09:37:01.413174  630355 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1227 09:37:01.413305  630355 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-246956] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1227 09:37:01.527664  630355 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1227 09:37:01.527805  630355 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-246956] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1227 09:37:01.615200  630355 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1227 09:37:01.840816  630355 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1227 09:37:02.017322  630355 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1227 09:37:02.017649  630355 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1227 09:37:02.264512  630355 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1227 09:37:02.566832  630355 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1227 09:37:02.683014  630355 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1227 09:37:02.757540  630355 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1227 09:37:02.820765  630355 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1227 09:37:02.821428  630355 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1227 09:37:02.825397  630355 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1227 09:37:02.370401  631392 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-497722 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 09:37:02.389071  631392 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1227 09:37:02.393241  631392 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 09:37:02.403448  631392 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-497722 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-497722 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1227 09:37:02.403574  631392 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 09:37:02.403630  631392 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 09:37:02.438551  631392 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 09:37:02.438590  631392 crio.go:433] Images already preloaded, skipping extraction
	I1227 09:37:02.438667  631392 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 09:37:02.465209  631392 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 09:37:02.465235  631392 cache_images.go:86] Images are preloaded, skipping loading
	I1227 09:37:02.465245  631392 kubeadm.go:935] updating node { 192.168.103.2 8444 v1.35.0 crio true true} ...
	I1227 09:37:02.465366  631392 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-497722 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-497722 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1227 09:37:02.465461  631392 ssh_runner.go:195] Run: crio config
	I1227 09:37:02.513257  631392 cni.go:84] Creating CNI manager for ""
	I1227 09:37:02.513278  631392 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 09:37:02.513294  631392 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1227 09:37:02.513317  631392 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8444 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-497722 NodeName:default-k8s-diff-port-497722 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1227 09:37:02.513444  631392 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-497722"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1227 09:37:02.513505  631392 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1227 09:37:02.521923  631392 binaries.go:51] Found k8s binaries, skipping transfer
	I1227 09:37:02.521985  631392 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1227 09:37:02.529554  631392 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I1227 09:37:02.543277  631392 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1227 09:37:02.555132  631392 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2227 bytes)
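
The rendered kubeadm config shown above is copied to /var/tmp/minikube/kubeadm.yaml.new (2227 bytes). A hedged sketch of a manual sanity check, assuming the kubeadm binary sits in the binaries directory found at 09:37:02.521923 (recent kubeadm releases ship a "config validate" subcommand):

	# Sketch only: validate the generated manifest against the kubeadm API
	# types before it is consumed; paths are taken from the surrounding log.
	sudo /var/lib/minikube/binaries/v1.35.0/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml.new
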
	I1227 09:37:02.567822  631392 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1227 09:37:02.571320  631392 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
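
The bash one-liner above is the log's idiom for refreshing a hosts entry: grep -v drops any stale control-plane.minikube.internal line, the echo appends the new mapping, and sudo cp writes the temp file back over /etc/hosts. A quick check of the result (expected line taken from the command itself):

	# Sketch: the rewritten /etc/hosts should now carry exactly one mapping.
	grep 'control-plane.minikube.internal' /etc/hosts
	# expected: 192.168.103.2	control-plane.minikube.internal
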
	I1227 09:37:02.580644  631392 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 09:37:02.664822  631392 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 09:37:02.693404  631392 certs.go:69] Setting up /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/default-k8s-diff-port-497722 for IP: 192.168.103.2
	I1227 09:37:02.693441  631392 certs.go:195] generating shared ca certs ...
	I1227 09:37:02.693462  631392 certs.go:227] acquiring lock for ca certs: {Name:mkab52f7e7387ca5a3a8b54d27e85c3698ea95d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:37:02.693637  631392 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22343-373581/.minikube/ca.key
	I1227 09:37:02.693699  631392 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22343-373581/.minikube/proxy-client-ca.key
	I1227 09:37:02.693714  631392 certs.go:257] generating profile certs ...
	I1227 09:37:02.693848  631392 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/default-k8s-diff-port-497722/client.key
	I1227 09:37:02.693949  631392 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/default-k8s-diff-port-497722/apiserver.key.70f960dd
	I1227 09:37:02.694002  631392 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/default-k8s-diff-port-497722/proxy-client.key
	I1227 09:37:02.694163  631392 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/377171.pem (1338 bytes)
	W1227 09:37:02.694205  631392 certs.go:480] ignoring /home/jenkins/minikube-integration/22343-373581/.minikube/certs/377171_empty.pem, impossibly tiny 0 bytes
	I1227 09:37:02.694217  631392 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca-key.pem (1679 bytes)
	I1227 09:37:02.694258  631392 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca.pem (1082 bytes)
	I1227 09:37:02.694290  631392 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/cert.pem (1123 bytes)
	I1227 09:37:02.694323  631392 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/key.pem (1679 bytes)
	I1227 09:37:02.694385  631392 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-373581/.minikube/files/etc/ssl/certs/3771712.pem (1708 bytes)
	I1227 09:37:02.695781  631392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1227 09:37:02.717703  631392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1227 09:37:02.740004  631392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1227 09:37:02.760168  631392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1227 09:37:02.785274  631392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/default-k8s-diff-port-497722/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1227 09:37:02.808608  631392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/default-k8s-diff-port-497722/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1227 09:37:02.826033  631392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/default-k8s-diff-port-497722/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1227 09:37:02.851489  631392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/default-k8s-diff-port-497722/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1227 09:37:02.871940  631392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1227 09:37:02.895485  631392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/certs/377171.pem --> /usr/share/ca-certificates/377171.pem (1338 bytes)
	I1227 09:37:02.913955  631392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/files/etc/ssl/certs/3771712.pem --> /usr/share/ca-certificates/3771712.pem (1708 bytes)
	I1227 09:37:02.930694  631392 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1227 09:37:02.945635  631392 ssh_runner.go:195] Run: openssl version
	I1227 09:37:02.953715  631392 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:37:02.961830  631392 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1227 09:37:02.969519  631392 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:37:02.973176  631392 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 09:06 /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:37:02.973234  631392 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:37:03.016770  631392 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1227 09:37:03.024510  631392 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/377171.pem
	I1227 09:37:03.031699  631392 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/377171.pem /etc/ssl/certs/377171.pem
	I1227 09:37:03.039836  631392 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/377171.pem
	I1227 09:37:03.043468  631392 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 27 09:09 /usr/share/ca-certificates/377171.pem
	I1227 09:37:03.043517  631392 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/377171.pem
	I1227 09:37:03.079383  631392 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1227 09:37:03.086880  631392 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3771712.pem
	I1227 09:37:03.094441  631392 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3771712.pem /etc/ssl/certs/3771712.pem
	I1227 09:37:03.102480  631392 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3771712.pem
	I1227 09:37:03.105974  631392 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 27 09:09 /usr/share/ca-certificates/3771712.pem
	I1227 09:37:03.106034  631392 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3771712.pem
	I1227 09:37:03.140053  631392 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
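
The test -L probes above rely on OpenSSL's hashed-directory convention: each CA under /etc/ssl/certs is found through a symlink named <subject-hash>.0. The hash for minikubeCA.pem can be reproduced directly, matching the b5213941.0 link checked at 09:37:03.016770:

	# Sketch: recompute the subject hash that names the symlink.
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	# prints b5213941, so verification resolves /etc/ssl/certs/b5213941.0
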
	I1227 09:37:03.147121  631392 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1227 09:37:03.150759  631392 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1227 09:37:03.185667  631392 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1227 09:37:03.239684  631392 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1227 09:37:03.282935  631392 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1227 09:37:03.332349  631392 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1227 09:37:03.387948  631392 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
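
Each openssl run above uses -checkend 86400, which exits 0 only if the certificate is still valid 86400 seconds (24 hours) from now, so this pass over the control-plane certs doubles as an expiry guard. A standalone equivalent, using one of the cert paths from the log:

	# Sketch: a non-zero exit here would signal the cert needs regenerating.
	openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400 \
	  && echo "valid for at least 24h" || echo "expiring within 24h"
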
	I1227 09:37:03.433087  631392 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-497722 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-497722 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 09:37:03.433200  631392 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 09:37:03.433280  631392 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 09:37:03.466372  631392 cri.go:96] found id: "f7a05d26251e8a9f4091116e036a79a8654a182636649e96285fc252c0530199"
	I1227 09:37:03.466401  631392 cri.go:96] found id: "05ea82c1bf330e30a293e6d7ea1c01b1766ebce50eeaf8868bbcf622fd71d8e8"
	I1227 09:37:03.466408  631392 cri.go:96] found id: "0c8a5140613f466c8107ca22c4400874507b9d96db9fc14bb7f9ecf967957942"
	I1227 09:37:03.466413  631392 cri.go:96] found id: "0f35c56ae0629feaaf5192c69a9a652f101e67591c4c93f500daa6ceb2a62911"
	I1227 09:37:03.466417  631392 cri.go:96] found id: ""
	I1227 09:37:03.466465  631392 ssh_runner.go:195] Run: sudo runc list -f json
	W1227 09:37:03.479438  631392 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T09:37:03Z" level=error msg="open /run/runc: no such file or directory"
	I1227 09:37:03.479541  631392 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1227 09:37:03.489354  631392 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1227 09:37:03.489372  631392 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1227 09:37:03.489431  631392 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1227 09:37:03.497550  631392 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1227 09:37:03.498709  631392 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-497722" does not appear in /home/jenkins/minikube-integration/22343-373581/kubeconfig
	I1227 09:37:03.499508  631392 kubeconfig.go:62] /home/jenkins/minikube-integration/22343-373581/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-497722" cluster setting kubeconfig missing "default-k8s-diff-port-497722" context setting]
	I1227 09:37:03.500656  631392 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-373581/kubeconfig: {Name:mkb411689a0952d3f8325ae3ededdb7352d2980c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:37:03.503000  631392 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1227 09:37:03.510898  631392 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.103.2
	I1227 09:37:03.510931  631392 kubeadm.go:602] duration metric: took 21.551771ms to restartPrimaryControlPlane
	I1227 09:37:03.510940  631392 kubeadm.go:403] duration metric: took 77.869263ms to StartCluster
	I1227 09:37:03.510958  631392 settings.go:142] acquiring lock: {Name:mkc867fd794159ff1847b43b60548e454c403aa9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:37:03.511015  631392 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22343-373581/kubeconfig
	I1227 09:37:03.512421  631392 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-373581/kubeconfig: {Name:mkb411689a0952d3f8325ae3ededdb7352d2980c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:37:03.512668  631392 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1227 09:37:03.512733  631392 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1227 09:37:03.512865  631392 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-497722"
	I1227 09:37:03.512886  631392 config.go:182] Loaded profile config "default-k8s-diff-port-497722": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:37:03.512903  631392 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-497722"
	I1227 09:37:03.512924  631392 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-497722"
	I1227 09:37:03.512892  631392 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-497722"
	W1227 09:37:03.512933  631392 addons.go:248] addon dashboard should already be in state true
	W1227 09:37:03.512937  631392 addons.go:248] addon storage-provisioner should already be in state true
	I1227 09:37:03.512934  631392 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-497722"
	I1227 09:37:03.512963  631392 host.go:66] Checking if "default-k8s-diff-port-497722" exists ...
	I1227 09:37:03.512991  631392 host.go:66] Checking if "default-k8s-diff-port-497722" exists ...
	I1227 09:37:03.512985  631392 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-497722"
	I1227 09:37:03.513292  631392 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-497722 --format={{.State.Status}}
	I1227 09:37:03.513452  631392 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-497722 --format={{.State.Status}}
	I1227 09:37:03.513483  631392 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-497722 --format={{.State.Status}}
	I1227 09:37:03.515099  631392 out.go:179] * Verifying Kubernetes components...
	I1227 09:37:03.516218  631392 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 09:37:03.545006  631392 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-497722"
	W1227 09:37:03.545032  631392 addons.go:248] addon default-storageclass should already be in state true
	I1227 09:37:03.545065  631392 host.go:66] Checking if "default-k8s-diff-port-497722" exists ...
	I1227 09:37:03.545116  631392 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1227 09:37:03.545116  631392 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1227 09:37:03.545523  631392 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-497722 --format={{.State.Status}}
	I1227 09:37:03.546781  631392 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1227 09:37:03.546811  631392 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1227 09:37:03.546873  631392 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-497722
	I1227 09:37:03.547729  631392 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1227 09:37:00.496152  629532 addons.go:530] duration metric: took 1.96813915s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1227 09:37:00.986939  629532 api_server.go:299] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1227 09:37:00.991714  629532 api_server.go:325] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1227 09:37:00.992754  629532 api_server.go:141] control plane version: v1.35.0
	I1227 09:37:00.992785  629532 api_server.go:131] duration metric: took 506.570525ms to wait for apiserver health ...
	I1227 09:37:00.992822  629532 system_pods.go:43] waiting for kube-system pods to appear ...
	I1227 09:37:00.997469  629532 system_pods.go:59] 8 kube-system pods found
	I1227 09:37:00.999467  629532 system_pods.go:61] "coredns-7d764666f9-wnzhx" [2152780b-b980-4e4a-b652-9cd0ec857a2a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 09:37:00.999484  629532 system_pods.go:61] "etcd-no-preload-963457" [ecf5237d-e6e0-4cee-977a-e2356ec1db8f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1227 09:37:00.999494  629532 system_pods.go:61] "kindnet-7kw8b" [36ed3b47-67e2-483b-9563-f3366df2f0c5] Running
	I1227 09:37:00.999504  629532 system_pods.go:61] "kube-apiserver-no-preload-963457" [95e8b881-2ba5-464a-951b-8cfe1d65f33b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1227 09:37:00.999520  629532 system_pods.go:61] "kube-controller-manager-no-preload-963457" [2f715235-2f6e-414e-911b-d6ee55e67a06] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1227 09:37:00.999526  629532 system_pods.go:61] "kube-proxy-grkqs" [6f0a5a48-9159-478a-8949-827103a7c85c] Running
	I1227 09:37:00.999537  629532 system_pods.go:61] "kube-scheduler-no-preload-963457" [f5b9da44-e7d5-4901-b9c6-2315b873d53a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1227 09:37:00.999542  629532 system_pods.go:61] "storage-provisioner" [3de40d14-cc88-44f8-a071-caea798ff465] Running
	I1227 09:37:00.999552  629532 system_pods.go:74] duration metric: took 6.721122ms to wait for pod list to return data ...
	I1227 09:37:00.999566  629532 default_sa.go:34] waiting for default service account to be created ...
	I1227 09:37:01.002442  629532 default_sa.go:45] found service account: "default"
	I1227 09:37:01.002467  629532 default_sa.go:55] duration metric: took 2.893913ms for default service account to be created ...
	I1227 09:37:01.002476  629532 system_pods.go:116] waiting for k8s-apps to be running ...
	I1227 09:37:01.005574  629532 system_pods.go:86] 8 kube-system pods found
	I1227 09:37:01.005606  629532 system_pods.go:89] "coredns-7d764666f9-wnzhx" [2152780b-b980-4e4a-b652-9cd0ec857a2a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 09:37:01.005617  629532 system_pods.go:89] "etcd-no-preload-963457" [ecf5237d-e6e0-4cee-977a-e2356ec1db8f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1227 09:37:01.005636  629532 system_pods.go:89] "kindnet-7kw8b" [36ed3b47-67e2-483b-9563-f3366df2f0c5] Running
	I1227 09:37:01.005647  629532 system_pods.go:89] "kube-apiserver-no-preload-963457" [95e8b881-2ba5-464a-951b-8cfe1d65f33b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1227 09:37:01.005659  629532 system_pods.go:89] "kube-controller-manager-no-preload-963457" [2f715235-2f6e-414e-911b-d6ee55e67a06] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1227 09:37:01.005669  629532 system_pods.go:89] "kube-proxy-grkqs" [6f0a5a48-9159-478a-8949-827103a7c85c] Running
	I1227 09:37:01.005678  629532 system_pods.go:89] "kube-scheduler-no-preload-963457" [f5b9da44-e7d5-4901-b9c6-2315b873d53a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1227 09:37:01.005686  629532 system_pods.go:89] "storage-provisioner" [3de40d14-cc88-44f8-a071-caea798ff465] Running
	I1227 09:37:01.005694  629532 system_pods.go:126] duration metric: took 3.211017ms to wait for k8s-apps to be running ...
	I1227 09:37:01.005703  629532 system_svc.go:44] waiting for kubelet service to be running ....
	I1227 09:37:01.005745  629532 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 09:37:01.019329  629532 system_svc.go:56] duration metric: took 13.615658ms WaitForService to wait for kubelet
	I1227 09:37:01.019357  629532 kubeadm.go:587] duration metric: took 2.491446215s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 09:37:01.019406  629532 node_conditions.go:102] verifying NodePressure condition ...
	I1227 09:37:01.022469  629532 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1227 09:37:01.022494  629532 node_conditions.go:123] node cpu capacity is 8
	I1227 09:37:01.022508  629532 node_conditions.go:105] duration metric: took 3.09307ms to run NodePressure ...
	I1227 09:37:01.022519  629532 start.go:242] waiting for startup goroutines ...
	I1227 09:37:01.022526  629532 start.go:247] waiting for cluster config update ...
	I1227 09:37:01.022539  629532 start.go:256] writing updated cluster config ...
	I1227 09:37:01.022825  629532 ssh_runner.go:195] Run: rm -f paused
	I1227 09:37:01.026415  629532 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1227 09:37:01.032367  629532 pod_ready.go:83] waiting for pod "coredns-7d764666f9-wnzhx" in "kube-system" namespace to be "Ready" or be gone ...
	W1227 09:37:03.037599  629532 pod_ready.go:104] pod "coredns-7d764666f9-wnzhx" is not "Ready", error: <nil>
	I1227 09:37:03.548864  631392 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1227 09:37:03.548881  631392 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1227 09:37:03.548946  631392 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-497722
	I1227 09:37:03.580601  631392 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33468 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/default-k8s-diff-port-497722/id_rsa Username:docker}
	I1227 09:37:03.585910  631392 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1227 09:37:03.585995  631392 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1227 09:37:03.586083  631392 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-497722
	I1227 09:37:03.587524  631392 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33468 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/default-k8s-diff-port-497722/id_rsa Username:docker}
	I1227 09:37:03.611367  631392 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33468 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/default-k8s-diff-port-497722/id_rsa Username:docker}
	I1227 09:37:03.693388  631392 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 09:37:03.735280  631392 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1227 09:37:03.735311  631392 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1227 09:37:03.739715  631392 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-497722" to be "Ready" ...
	I1227 09:37:03.745369  631392 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1227 09:37:03.756381  631392 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1227 09:37:03.772524  631392 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1227 09:37:03.772552  631392 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1227 09:37:03.796801  631392 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1227 09:37:03.796827  631392 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1227 09:37:03.830005  631392 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1227 09:37:03.830031  631392 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1227 09:37:03.859155  631392 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1227 09:37:03.859261  631392 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1227 09:37:03.882955  631392 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1227 09:37:03.883028  631392 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1227 09:37:03.903109  631392 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1227 09:37:03.903135  631392 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1227 09:37:03.920352  631392 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1227 09:37:03.920389  631392 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1227 09:37:03.938041  631392 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1227 09:37:03.938067  631392 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1227 09:37:03.952702  631392 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1227 09:37:05.081740  631392 node_ready.go:49] node "default-k8s-diff-port-497722" is "Ready"
	I1227 09:37:05.081784  631392 node_ready.go:38] duration metric: took 1.342025698s for node "default-k8s-diff-port-497722" to be "Ready" ...
	I1227 09:37:05.081817  631392 api_server.go:52] waiting for apiserver process to appear ...
	I1227 09:37:05.081879  631392 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 09:37:05.822291  631392 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.076880271s)
	I1227 09:37:05.822373  631392 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.065940093s)
	I1227 09:37:05.822447  631392 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.869714406s)
	I1227 09:37:05.822496  631392 api_server.go:72] duration metric: took 2.309795438s to wait for apiserver process to appear ...
	I1227 09:37:05.822521  631392 api_server.go:88] waiting for apiserver healthz status ...
	I1227 09:37:05.822603  631392 api_server.go:299] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I1227 09:37:05.823594  631392 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-497722 addons enable metrics-server
	
	I1227 09:37:05.828585  631392 api_server.go:325] https://192.168.103.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1227 09:37:05.828612  631392 api_server.go:103] status: https://192.168.103.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1227 09:37:05.833008  631392 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1227 09:37:05.834729  631392 addons.go:530] duration metric: took 2.322002327s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
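
The 500 above is the transient phase where rbac/bootstrap-roles and the system priority classes are still being installed; the follow-up probe at 09:37:06.322979 returns 200. A hedged manual equivalent of this polling (the ?verbose query is what yields the per-check [+]/[-] listing; -k skips TLS verification since the apiserver certificate is not in the host trust store):

	# Sketch: poll the same endpoint the log is checking.
	curl -sk "https://192.168.103.2:8444/healthz?verbose"
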
	I1227 09:37:02.827049  630355 out.go:252]   - Booting up control plane ...
	I1227 09:37:02.827173  630355 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1227 09:37:02.827659  630355 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1227 09:37:02.828658  630355 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1227 09:37:02.846003  630355 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1227 09:37:02.846144  630355 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1227 09:37:02.856948  630355 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1227 09:37:02.857081  630355 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1227 09:37:02.857138  630355 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1227 09:37:02.965432  630355 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1227 09:37:02.965600  630355 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1227 09:37:03.468145  630355 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.896581ms
	I1227 09:37:03.472302  630355 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1227 09:37:03.472420  630355 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1227 09:37:03.472580  630355 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1227 09:37:03.472737  630355 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1227 09:37:04.484316  630355 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.011409443s
	I1227 09:37:05.492077  630355 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.019567088s
	I1227 09:37:06.973882  630355 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.501519063s
	I1227 09:37:06.989241  630355 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1227 09:37:06.997155  630355 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1227 09:37:07.005486  630355 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1227 09:37:07.005698  630355 kubeadm.go:319] [mark-control-plane] Marking the node newest-cni-246956 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1227 09:37:07.013098  630355 kubeadm.go:319] [bootstrap-token] Using token: kw0ne7.e0ofxhotwu7t62i6
	I1227 09:37:07.014365  630355 out.go:252]   - Configuring RBAC rules ...
	I1227 09:37:07.014525  630355 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1227 09:37:07.017519  630355 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1227 09:37:07.022777  630355 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1227 09:37:07.025010  630355 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1227 09:37:07.028326  630355 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1227 09:37:07.030560  630355 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1227 09:37:07.380628  630355 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1227 09:37:07.798585  630355 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1227 09:37:08.380637  630355 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1227 09:37:08.381938  630355 kubeadm.go:319] 
	I1227 09:37:08.382044  630355 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1227 09:37:08.382060  630355 kubeadm.go:319] 
	I1227 09:37:08.382154  630355 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1227 09:37:08.382182  630355 kubeadm.go:319] 
	I1227 09:37:08.382220  630355 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1227 09:37:08.382289  630355 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1227 09:37:08.382354  630355 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1227 09:37:08.382366  630355 kubeadm.go:319] 
	I1227 09:37:08.382438  630355 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1227 09:37:08.382449  630355 kubeadm.go:319] 
	I1227 09:37:08.382507  630355 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1227 09:37:08.382522  630355 kubeadm.go:319] 
	I1227 09:37:08.382996  630355 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1227 09:37:08.383196  630355 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1227 09:37:08.383307  630355 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1227 09:37:08.383318  630355 kubeadm.go:319] 
	I1227 09:37:08.383460  630355 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1227 09:37:08.383582  630355 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1227 09:37:08.383593  630355 kubeadm.go:319] 
	I1227 09:37:08.383722  630355 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token kw0ne7.e0ofxhotwu7t62i6 \
	I1227 09:37:08.383907  630355 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:fdd38f26fbaca57ed47cc723a4a49372698d5a31b243372bf53d5b274efa96f5 \
	I1227 09:37:08.383937  630355 kubeadm.go:319] 	--control-plane 
	I1227 09:37:08.383942  630355 kubeadm.go:319] 
	I1227 09:37:08.384084  630355 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1227 09:37:08.384101  630355 kubeadm.go:319] 
	I1227 09:37:08.384257  630355 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token kw0ne7.e0ofxhotwu7t62i6 \
	I1227 09:37:08.384453  630355 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:fdd38f26fbaca57ed47cc723a4a49372698d5a31b243372bf53d5b274efa96f5 
	I1227 09:37:08.387345  630355 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1227 09:37:08.387526  630355 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1227 09:37:08.387558  630355 cni.go:84] Creating CNI manager for ""
	I1227 09:37:08.387571  630355 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 09:37:08.389255  630355 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1227 09:37:05.046412  629532 pod_ready.go:104] pod "coredns-7d764666f9-wnzhx" is not "Ready", error: <nil>
	W1227 09:37:07.538510  629532 pod_ready.go:104] pod "coredns-7d764666f9-wnzhx" is not "Ready", error: <nil>
	W1227 09:37:09.541062  629532 pod_ready.go:104] pod "coredns-7d764666f9-wnzhx" is not "Ready", error: <nil>
	I1227 09:37:06.322979  631392 api_server.go:299] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I1227 09:37:06.328435  631392 api_server.go:325] https://192.168.103.2:8444/healthz returned 200:
	ok
	I1227 09:37:06.329947  631392 api_server.go:141] control plane version: v1.35.0
	I1227 09:37:06.329981  631392 api_server.go:131] duration metric: took 507.399256ms to wait for apiserver health ...
	I1227 09:37:06.329993  631392 system_pods.go:43] waiting for kube-system pods to appear ...
	I1227 09:37:06.333774  631392 system_pods.go:59] 8 kube-system pods found
	I1227 09:37:06.333837  631392 system_pods.go:61] "coredns-7d764666f9-wfv5r" [c9445108-899a-4589-9501-4ffa7cd80a43] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 09:37:06.333861  631392 system_pods.go:61] "etcd-default-k8s-diff-port-497722" [5e51d021-5ac6-42ab-82c9-2f3db25a111e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1227 09:37:06.333877  631392 system_pods.go:61] "kindnet-rd4dj" [c9a44ecf-3860-4022-bcbe-25cfdb86502a] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1227 09:37:06.333888  631392 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-497722" [fc2ed6d4-1f2c-4e4d-9a59-07b9ef712893] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1227 09:37:06.333905  631392 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-497722" [c56ec738-10be-417c-808c-c65189f7a6a8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1227 09:37:06.333918  631392 system_pods.go:61] "kube-proxy-6z4vt" [25c2458e-d68a-488d-803c-80e0c6191bad] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1227 09:37:06.333929  631392 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-497722" [fccea7b8-16e7-488f-b2a4-3d46c2d108d9] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1227 09:37:06.333937  631392 system_pods.go:61] "storage-provisioner" [c84aab32-9d34-4b1d-a3ee-813926808b75] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1227 09:37:06.333948  631392 system_pods.go:74] duration metric: took 3.947114ms to wait for pod list to return data ...
	I1227 09:37:06.333961  631392 default_sa.go:34] waiting for default service account to be created ...
	I1227 09:37:06.336565  631392 default_sa.go:45] found service account: "default"
	I1227 09:37:06.336587  631392 default_sa.go:55] duration metric: took 2.617601ms for default service account to be created ...
	I1227 09:37:06.336597  631392 system_pods.go:116] waiting for k8s-apps to be running ...
	I1227 09:37:06.339330  631392 system_pods.go:86] 8 kube-system pods found
	I1227 09:37:06.339360  631392 system_pods.go:89] "coredns-7d764666f9-wfv5r" [c9445108-899a-4589-9501-4ffa7cd80a43] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 09:37:06.339372  631392 system_pods.go:89] "etcd-default-k8s-diff-port-497722" [5e51d021-5ac6-42ab-82c9-2f3db25a111e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1227 09:37:06.339386  631392 system_pods.go:89] "kindnet-rd4dj" [c9a44ecf-3860-4022-bcbe-25cfdb86502a] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1227 09:37:06.339401  631392 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-497722" [fc2ed6d4-1f2c-4e4d-9a59-07b9ef712893] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1227 09:37:06.339414  631392 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-497722" [c56ec738-10be-417c-808c-c65189f7a6a8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1227 09:37:06.339422  631392 system_pods.go:89] "kube-proxy-6z4vt" [25c2458e-d68a-488d-803c-80e0c6191bad] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1227 09:37:06.339436  631392 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-497722" [fccea7b8-16e7-488f-b2a4-3d46c2d108d9] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1227 09:37:06.339450  631392 system_pods.go:89] "storage-provisioner" [c84aab32-9d34-4b1d-a3ee-813926808b75] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1227 09:37:06.339460  631392 system_pods.go:126] duration metric: took 2.854974ms to wait for k8s-apps to be running ...
	I1227 09:37:06.339469  631392 system_svc.go:44] waiting for kubelet service to be running ....
	I1227 09:37:06.339521  631392 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 09:37:06.357773  631392 system_svc.go:56] duration metric: took 18.294659ms WaitForService to wait for kubelet
	I1227 09:37:06.357818  631392 kubeadm.go:587] duration metric: took 2.845118615s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 09:37:06.357845  631392 node_conditions.go:102] verifying NodePressure condition ...
	I1227 09:37:06.360952  631392 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1227 09:37:06.360981  631392 node_conditions.go:123] node cpu capacity is 8
	I1227 09:37:06.361002  631392 node_conditions.go:105] duration metric: took 3.142212ms to run NodePressure ...
	I1227 09:37:06.361018  631392 start.go:242] waiting for startup goroutines ...
	I1227 09:37:06.361047  631392 start.go:247] waiting for cluster config update ...
	I1227 09:37:06.361060  631392 start.go:256] writing updated cluster config ...
	I1227 09:37:06.361365  631392 ssh_runner.go:195] Run: rm -f paused
	I1227 09:37:06.366245  631392 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1227 09:37:06.370043  631392 pod_ready.go:83] waiting for pod "coredns-7d764666f9-wfv5r" in "kube-system" namespace to be "Ready" or be gone ...
	W1227 09:37:08.375981  631392 pod_ready.go:104] pod "coredns-7d764666f9-wfv5r" is not "Ready", error: <nil>
	W1227 09:37:10.379441  631392 pod_ready.go:104] pod "coredns-7d764666f9-wfv5r" is not "Ready", error: <nil>
	I1227 09:37:08.390254  630355 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1227 09:37:08.395438  630355 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.35.0/kubectl ...
	I1227 09:37:08.395460  630355 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I1227 09:37:08.411692  630355 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
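
The CNI step above is two node-side commands: stat the portmap plugin to confirm the CNI binaries are present, then apply the manifest that was copied to /var/tmp/minikube/cni.yaml. The apply, reformatted for readability with every path copied from the log:

    # Apply the generated CNI manifest exactly as the ssh_runner line does
    sudo /var/lib/minikube/binaries/v1.35.0/kubectl \
      --kubeconfig=/var/lib/minikube/kubeconfig \
      apply -f /var/tmp/minikube/cni.yaml
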
	I1227 09:37:08.701041  630355 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1227 09:37:08.701214  630355 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 09:37:08.701332  630355 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-246956 minikube.k8s.io/updated_at=2025_12_27T09_37_08_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6d12708197eb5984f26340287ced0d25367967e8 minikube.k8s.io/name=newest-cni-246956 minikube.k8s.io/primary=true
	I1227 09:37:08.716553  630355 ops.go:34] apiserver oom_adj: -16
	I1227 09:37:08.804282  630355 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 09:37:09.304609  630355 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 09:37:09.804478  630355 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 09:37:10.305050  630355 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 09:37:10.804402  630355 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 09:37:11.304767  630355 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 09:37:11.804674  630355 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 09:37:12.305069  630355 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 09:37:12.805104  630355 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 09:37:13.304712  630355 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 09:37:13.404981  630355 kubeadm.go:1114] duration metric: took 4.703817064s to wait for elevateKubeSystemPrivileges
	I1227 09:37:13.405019  630355 kubeadm.go:403] duration metric: took 12.787533089s to StartCluster
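
The ten `kubectl get sa default` runs above are a fixed half-second poll: elevateKubeSystemPrivileges is waiting for the default ServiceAccount in kube-system to be provisioned before it declares the RBAC step done, which is why the duration metric lands at ~4.7s. A rough shell equivalent (loop shape assumed; the command itself is from the log):

    # Poll every 500ms until the default ServiceAccount exists
    until sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default \
          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done
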
	I1227 09:37:13.405045  630355 settings.go:142] acquiring lock: {Name:mkc867fd794159ff1847b43b60548e454c403aa9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:37:13.405126  630355 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22343-373581/kubeconfig
	I1227 09:37:13.407805  630355 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-373581/kubeconfig: {Name:mkb411689a0952d3f8325ae3ededdb7352d2980c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:37:13.408105  630355 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1227 09:37:13.408219  630355 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1227 09:37:13.408237  630355 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1227 09:37:13.408318  630355 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-246956"
	I1227 09:37:13.408346  630355 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-246956"
	I1227 09:37:13.408385  630355 host.go:66] Checking if "newest-cni-246956" exists ...
	I1227 09:37:13.408436  630355 config.go:182] Loaded profile config "newest-cni-246956": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:37:13.408377  630355 addons.go:70] Setting default-storageclass=true in profile "newest-cni-246956"
	I1227 09:37:13.408494  630355 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-246956"
	I1227 09:37:13.408941  630355 cli_runner.go:164] Run: docker container inspect newest-cni-246956 --format={{.State.Status}}
	I1227 09:37:13.408985  630355 cli_runner.go:164] Run: docker container inspect newest-cni-246956 --format={{.State.Status}}
	I1227 09:37:13.410120  630355 out.go:179] * Verifying Kubernetes components...
	I1227 09:37:13.411386  630355 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 09:37:13.436409  630355 addons.go:239] Setting addon default-storageclass=true in "newest-cni-246956"
	I1227 09:37:13.436462  630355 host.go:66] Checking if "newest-cni-246956" exists ...
	I1227 09:37:13.436486  630355 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1227 09:37:13.436995  630355 cli_runner.go:164] Run: docker container inspect newest-cni-246956 --format={{.State.Status}}
	I1227 09:37:13.437815  630355 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1227 09:37:13.437836  630355 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1227 09:37:13.437890  630355 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-246956
	I1227 09:37:13.470222  630355 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1227 09:37:13.470333  630355 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1227 09:37:13.470460  630355 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-246956
	I1227 09:37:13.470880  630355 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/newest-cni-246956/id_rsa Username:docker}
	I1227 09:37:13.501499  630355 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/newest-cni-246956/id_rsa Username:docker}
	I1227 09:37:13.522610  630355 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1227 09:37:13.595670  630355 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1227 09:37:13.601932  630355 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 09:37:13.631849  630355 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1227 09:37:13.821310  630355 start.go:987] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
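
The pipeline at 09:37:13.522610 is dense but mechanical: fetch the coredns ConfigMap as YAML, have sed splice a hosts{} block (mapping host.minikube.internal to the host gateway 192.168.76.1) ahead of the forward plugin and a log directive ahead of errors, then replace the ConfigMap. Unrolled, with the sudo and kubeconfig plumbing dropped:

    # Rewrite the Corefile inside the coredns ConfigMap in one pass
    kubectl -n kube-system get configmap coredns -o yaml \
      | sed -e '/forward . \/etc\/resolv.conf/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' \
            -e '/^        errors *$/i \        log' \
      | kubectl -n kube-system replace -f -
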
	I1227 09:37:14.024585  630355 api_server.go:52] waiting for apiserver process to appear ...
	I1227 09:37:14.024669  630355 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 09:37:14.038954  630355 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1227 09:37:14.039897  630355 addons.go:530] duration metric: took 631.665184ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1227 09:37:14.044768  630355 api_server.go:72] duration metric: took 636.623759ms to wait for apiserver process to appear ...
	I1227 09:37:14.044806  630355 api_server.go:88] waiting for apiserver healthz status ...
	I1227 09:37:14.044828  630355 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1227 09:37:14.051156  630355 api_server.go:325] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1227 09:37:14.052113  630355 api_server.go:141] control plane version: v1.35.0
	I1227 09:37:14.052147  630355 api_server.go:131] duration metric: took 7.331844ms to wait for apiserver health ...
	I1227 09:37:14.052157  630355 system_pods.go:43] waiting for kube-system pods to appear ...
	I1227 09:37:14.055482  630355 system_pods.go:59] 8 kube-system pods found
	I1227 09:37:14.055525  630355 system_pods.go:61] "coredns-7d764666f9-kqzph" [cd4faccb-5994-46cb-a83b-d554df2fb8f2] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1227 09:37:14.055546  630355 system_pods.go:61] "etcd-newest-cni-246956" [26721526-906a-4949-a50f-92ea210b80be] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1227 09:37:14.055559  630355 system_pods.go:61] "kindnet-lmtxw" [e2185b04-5cba-4c54-86e0-9c2515f95074] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1227 09:37:14.055572  630355 system_pods.go:61] "kube-apiserver-newest-cni-246956" [7e3043fd-edc4-4182-8659-eba54f67a2d9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1227 09:37:14.055582  630355 system_pods.go:61] "kube-controller-manager-newest-cni-246956" [7a30adc9-ce06-4908-8f0b-ed3da78f6394] Running
	I1227 09:37:14.055591  630355 system_pods.go:61] "kube-proxy-65ltj" [a1e5773a-e15f-405b-bca5-62a52d6e83a2] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1227 09:37:14.055603  630355 system_pods.go:61] "kube-scheduler-newest-cni-246956" [e515cbde-415b-4a69-b0be-a4c87c86858e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1227 09:37:14.055620  630355 system_pods.go:61] "storage-provisioner" [0735bc86-6017-4c08-8562-4a36fe686929] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1227 09:37:14.055629  630355 system_pods.go:74] duration metric: took 3.463288ms to wait for pod list to return data ...
	I1227 09:37:14.055639  630355 default_sa.go:34] waiting for default service account to be created ...
	I1227 09:37:14.058131  630355 default_sa.go:45] found service account: "default"
	I1227 09:37:14.058152  630355 default_sa.go:55] duration metric: took 2.506015ms for default service account to be created ...
	I1227 09:37:14.058166  630355 kubeadm.go:587] duration metric: took 650.02674ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1227 09:37:14.058190  630355 node_conditions.go:102] verifying NodePressure condition ...
	I1227 09:37:14.060558  630355 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1227 09:37:14.060580  630355 node_conditions.go:123] node cpu capacity is 8
	I1227 09:37:14.060600  630355 node_conditions.go:105] duration metric: took 2.400745ms to run NodePressure ...
	I1227 09:37:14.060615  630355 start.go:242] waiting for startup goroutines ...
	I1227 09:37:14.325652  630355 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-246956" context rescaled to 1 replicas
	I1227 09:37:14.325757  630355 start.go:247] waiting for cluster config update ...
	I1227 09:37:14.325781  630355 start.go:256] writing updated cluster config ...
	I1227 09:37:14.326218  630355 ssh_runner.go:195] Run: rm -f paused
	I1227 09:37:14.386984  630355 start.go:625] kubectl: 1.35.0, cluster: 1.35.0 (minor skew: 0)
	I1227 09:37:14.389355  630355 out.go:179] * Done! kubectl is now configured to use "newest-cni-246956" cluster and "default" namespace by default
	W1227 09:37:12.038861  629532 pod_ready.go:104] pod "coredns-7d764666f9-wnzhx" is not "Ready", error: <nil>
	W1227 09:37:14.039515  629532 pod_ready.go:104] pod "coredns-7d764666f9-wnzhx" is not "Ready", error: <nil>
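
The repeated pod_ready warnings from both clients (PIDs 629532 and 631392) are the same poll: the coredns pod exists but its Ready condition is still False. The condition being read can be checked directly; pod name taken from the log:

    # Show the Ready condition the wait loop is polling
    kubectl -n kube-system get pod coredns-7d764666f9-wnzhx \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
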
	
	
	==> CRI-O <==
	Dec 27 09:37:03 newest-cni-246956 crio[775]: time="2025-12-27T09:37:03.840103648Z" level=info msg="Started container" PID=1227 containerID=a90c2c83dc04390bf394bc666e08a3d41efe9dc162706683b3578d9d9bbee3c0 description=kube-system/kube-controller-manager-newest-cni-246956/kube-controller-manager id=bdcd3fab-8a3b-46f9-aa82-772ecda3278a name=/runtime.v1.RuntimeService/StartContainer sandboxID=a052f0ee05082bc3497df05f52981f3e90a909c3a314080ea1172aab9f121570
	Dec 27 09:37:13 newest-cni-246956 crio[775]: time="2025-12-27T09:37:13.667767761Z" level=info msg="Running pod sandbox: kube-system/kube-proxy-65ltj/POD" id=e3198ee8-fed9-4186-8ac8-b43143efa2a9 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 27 09:37:13 newest-cni-246956 crio[775]: time="2025-12-27T09:37:13.667896221Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 09:37:13 newest-cni-246956 crio[775]: time="2025-12-27T09:37:13.671424974Z" level=info msg="Running pod sandbox: kube-system/kindnet-lmtxw/POD" id=15e85ead-2095-4c99-860f-3b6059776ed6 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 27 09:37:13 newest-cni-246956 crio[775]: time="2025-12-27T09:37:13.671488649Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 09:37:13 newest-cni-246956 crio[775]: time="2025-12-27T09:37:13.676954406Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=e3198ee8-fed9-4186-8ac8-b43143efa2a9 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 27 09:37:13 newest-cni-246956 crio[775]: time="2025-12-27T09:37:13.677921013Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=15e85ead-2095-4c99-860f-3b6059776ed6 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 27 09:37:13 newest-cni-246956 crio[775]: time="2025-12-27T09:37:13.679231501Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 27 09:37:13 newest-cni-246956 crio[775]: time="2025-12-27T09:37:13.680509903Z" level=info msg="Ran pod sandbox 1588c6f4ac5f6b04b99e3225726ff6e598eca3faadc38e45481516cac7cef2be with infra container: kube-system/kube-proxy-65ltj/POD" id=e3198ee8-fed9-4186-8ac8-b43143efa2a9 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 27 09:37:13 newest-cni-246956 crio[775]: time="2025-12-27T09:37:13.682387593Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0" id=04eeb566-fcb0-4b11-85e1-d5ed81e16bcc name=/runtime.v1.ImageService/ImageStatus
	Dec 27 09:37:13 newest-cni-246956 crio[775]: time="2025-12-27T09:37:13.683104965Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 27 09:37:13 newest-cni-246956 crio[775]: time="2025-12-27T09:37:13.684075025Z" level=info msg="Ran pod sandbox 10cf475c3d519208f57071f79dbc2d0c4b2c53e749c6cfbdb8fc6748efb41900 with infra container: kube-system/kindnet-lmtxw/POD" id=15e85ead-2095-4c99-860f-3b6059776ed6 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 27 09:37:13 newest-cni-246956 crio[775]: time="2025-12-27T09:37:13.684197992Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0" id=35ac4865-6fbe-4326-95d9-ca39f41ac203 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 09:37:13 newest-cni-246956 crio[775]: time="2025-12-27T09:37:13.685828975Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88" id=717a2949-cc65-43f3-9e9a-863977d0f581 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 09:37:13 newest-cni-246956 crio[775]: time="2025-12-27T09:37:13.686132266Z" level=info msg="Image docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88 not found" id=717a2949-cc65-43f3-9e9a-863977d0f581 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 09:37:13 newest-cni-246956 crio[775]: time="2025-12-27T09:37:13.686537705Z" level=info msg="Neither image nor artfiact docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88 found" id=717a2949-cc65-43f3-9e9a-863977d0f581 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 09:37:13 newest-cni-246956 crio[775]: time="2025-12-27T09:37:13.687588795Z" level=info msg="Pulling image: docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88" id=ef7f0f05-e23e-44b1-a608-bfbaedcfdc95 name=/runtime.v1.ImageService/PullImage
	Dec 27 09:37:13 newest-cni-246956 crio[775]: time="2025-12-27T09:37:13.689615614Z" level=info msg="Trying to access \"docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88\""
	Dec 27 09:37:13 newest-cni-246956 crio[775]: time="2025-12-27T09:37:13.697926292Z" level=info msg="Creating container: kube-system/kube-proxy-65ltj/kube-proxy" id=43a133dc-7b45-4663-8b01-0f872107420d name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 09:37:13 newest-cni-246956 crio[775]: time="2025-12-27T09:37:13.698953886Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 09:37:13 newest-cni-246956 crio[775]: time="2025-12-27T09:37:13.704244437Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 09:37:13 newest-cni-246956 crio[775]: time="2025-12-27T09:37:13.705293441Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 09:37:13 newest-cni-246956 crio[775]: time="2025-12-27T09:37:13.749775018Z" level=info msg="Created container c833d23613c1b7894c0429b71a6606d855676175011857c9c72113e4f98a89b0: kube-system/kube-proxy-65ltj/kube-proxy" id=43a133dc-7b45-4663-8b01-0f872107420d name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 09:37:13 newest-cni-246956 crio[775]: time="2025-12-27T09:37:13.750856531Z" level=info msg="Starting container: c833d23613c1b7894c0429b71a6606d855676175011857c9c72113e4f98a89b0" id=2d14f064-d31e-4a7d-b281-6380162e001e name=/runtime.v1.RuntimeService/StartContainer
	Dec 27 09:37:13 newest-cni-246956 crio[775]: time="2025-12-27T09:37:13.754407623Z" level=info msg="Started container" PID=1592 containerID=c833d23613c1b7894c0429b71a6606d855676175011857c9c72113e4f98a89b0 description=kube-system/kube-proxy-65ltj/kube-proxy id=2d14f064-d31e-4a7d-b281-6380162e001e name=/runtime.v1.RuntimeService/StartContainer sandboxID=1588c6f4ac5f6b04b99e3225726ff6e598eca3faadc38e45481516cac7cef2be
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	c833d23613c1b       32652ff1bbe6b6df16a3dc621fcad4e6e185a2852bffc5847aab79e36c54bfb8   2 seconds ago       Running             kube-proxy                0                   1588c6f4ac5f6       kube-proxy-65ltj                            kube-system
	e76b1be920fae       550794e3b12ac21ec7fd940bdfb45f7c8dae3c52acd8c70e580b882de20c3dcc   12 seconds ago      Running             kube-scheduler            0                   ce6d94dac60d7       kube-scheduler-newest-cni-246956            kube-system
	a90c2c83dc043       2c9a4b058bd7e6ec479c38f9e8a1dac2f5ee5b0a3ebda6dfac968e8720229508   12 seconds ago      Running             kube-controller-manager   0                   a052f0ee05082       kube-controller-manager-newest-cni-246956   kube-system
	fae0ee076405a       0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2   12 seconds ago      Running             etcd                      0                   a62a0c4e21f4c       etcd-newest-cni-246956                      kube-system
	91b10d5c42a02       5c6acd67e9cd10eb60a246cd233db251ef62ea97e6572f897e873f0cb648f499   12 seconds ago      Running             kube-apiserver            0                   32aff8e82a245       kube-apiserver-newest-cni-246956            kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-246956
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-246956
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6d12708197eb5984f26340287ced0d25367967e8
	                    minikube.k8s.io/name=newest-cni-246956
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_27T09_37_08_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 27 Dec 2025 09:37:05 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-246956
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 27 Dec 2025 09:37:07 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 27 Dec 2025 09:37:07 +0000   Sat, 27 Dec 2025 09:37:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 27 Dec 2025 09:37:07 +0000   Sat, 27 Dec 2025 09:37:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 27 Dec 2025 09:37:07 +0000   Sat, 27 Dec 2025 09:37:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sat, 27 Dec 2025 09:37:07 +0000   Sat, 27 Dec 2025 09:37:03 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-246956
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 6a05ebbd9dbde47388fc365d694bbbfd
	  System UUID:                0282d3f4-6f31-42cc-85b2-77d015ffb093
	  Boot ID:                    38891013-55ac-4a66-aef6-0b0711ecc60c
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-246956                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         9s
	  kube-system                 kindnet-lmtxw                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      3s
	  kube-system                 kube-apiserver-newest-cni-246956             250m (3%)     0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 kube-controller-manager-newest-cni-246956    200m (2%)     0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 kube-proxy-65ltj                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         3s
	  kube-system                 kube-scheduler-newest-cni-246956             100m (1%)     0 (0%)      0 (0%)           0 (0%)         9s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  4s    node-controller  Node newest-cni-246956 event: Registered Node newest-cni-246956 in Controller
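
The describe output captures the startup race directly: kindnet has not yet written a CNI config, so the kubelet reports NetworkReady=false, the node carries the node.kubernetes.io/not-ready:NoSchedule taint, and coredns plus storage-provisioner stay Unschedulable. Two quick checks for the same state:

    # Read the taint and the Ready condition's message straight from the API
    kubectl get node newest-cni-246956 -o jsonpath='{.spec.taints}'
    kubectl get node newest-cni-246956 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].message}'
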
	
	
	==> dmesg <==
	[Dec27 09:02] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 3a ce 76 ea 00 16 08 06
	[ +15.308804] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 5e ed 76 f4 5a 59 08 06
	[  +0.000321] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 3a ce 76 ea 00 16 08 06
	[  +1.257710] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 62 30 8d 6e b6 d3 08 06
	[Dec27 09:03] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff de f8 3b c4 47 35 08 06
	[  +0.002052] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff de 0f d7 5a 4b 84 08 06
	[ +15.582399] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff f6 ce 0c 6c 86 d2 08 06
	[  +0.000307] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 62 30 8d 6e b6 d3 08 06
	[  +9.916919] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 5e cc 0c a0 d8 19 08 06
	[  +0.000360] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 06 01 72 f8 79 43 08 06
	[ +21.695394] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 66 bb be 81 50 ce 08 06
	[  +0.000381] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff de 0f d7 5a 4b 84 08 06
	
	
	==> etcd [fae0ee076405ae54c28f091f71d070d16ebc242fc3d4f2ea250813b217a6f918] <==
	{"level":"info","ts":"2025-12-27T09:37:03.898151Z","caller":"membership/cluster.go:424","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2025-12-27T09:37:04.089207Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"ea7e25599daad906 is starting a new election at term 1"}
	{"level":"info","ts":"2025-12-27T09:37:04.089302Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"ea7e25599daad906 became pre-candidate at term 1"}
	{"level":"info","ts":"2025-12-27T09:37:04.089460Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 1"}
	{"level":"info","ts":"2025-12-27T09:37:04.089527Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-27T09:37:04.089590Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"ea7e25599daad906 became candidate at term 2"}
	{"level":"info","ts":"2025-12-27T09:37:04.090359Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-12-27T09:37:04.090443Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-27T09:37:04.090478Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"ea7e25599daad906 became leader at term 2"}
	{"level":"info","ts":"2025-12-27T09:37:04.090494Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-12-27T09:37:04.091211Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:newest-cni-246956 ClientURLs:[https://192.168.76.2:2379]}","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-27T09:37:04.091254Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T09:37:04.091477Z","caller":"etcdserver/server.go:2420","msg":"setting up initial cluster version using v3 API","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-27T09:37:04.091537Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T09:37:04.091934Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-27T09:37:04.091972Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-27T09:37:04.092612Z","caller":"membership/cluster.go:682","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-27T09:37:04.093297Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-27T09:37:04.093015Z","caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-27T09:37:04.095672Z","caller":"etcdserver/server.go:2440","msg":"cluster version is updated","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-27T09:37:04.093482Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-27T09:37:04.095808Z","caller":"version/monitor.go:116","msg":"cluster version differs from storage version.","cluster-version":"3.6.0","storage-version":"3.5.0"}
	{"level":"info","ts":"2025-12-27T09:37:04.095912Z","caller":"schema/migration.go:65","msg":"updated storage version","new-storage-version":"3.6.0"}
	{"level":"info","ts":"2025-12-27T09:37:04.096444Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-27T09:37:04.096718Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	
	
	==> kernel <==
	 09:37:16 up  1:19,  0 user,  load average: 3.03, 3.06, 2.33
	Linux newest-cni-246956 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kube-apiserver [91b10d5c42a02613195fdeeb6c88f71ab0aee383051b2db64bb71f46d8c50311] <==
	E1227 09:37:05.532717       1 controller.go:201] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	E1227 09:37:05.558512       1 controller.go:156] "Error while syncing ConfigMap" err="namespaces \"kube-system\" not found" logger="UnhandledError" configmap="kube-system/kube-apiserver-legacy-service-account-token-tracking"
	I1227 09:37:05.605411       1 controller.go:667] quota admission added evaluator for: namespaces
	I1227 09:37:05.610551       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1227 09:37:05.610651       1 default_servicecidr_controller.go:231] Setting default ServiceCIDR condition Ready to True
	I1227 09:37:05.624560       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1227 09:37:05.736723       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1227 09:37:06.407932       1 storage_scheduling.go:123] created PriorityClass system-node-critical with value 2000001000
	I1227 09:37:06.412133       1 storage_scheduling.go:123] created PriorityClass system-cluster-critical with value 2000000000
	I1227 09:37:06.412151       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1227 09:37:06.900446       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1227 09:37:06.937294       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1227 09:37:07.010881       1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1227 09:37:07.016248       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1227 09:37:07.017203       1 controller.go:667] quota admission added evaluator for: endpoints
	I1227 09:37:07.020945       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1227 09:37:07.433610       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1227 09:37:07.787440       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1227 09:37:07.797626       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1227 09:37:07.805294       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1227 09:37:12.888426       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1227 09:37:13.087587       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1227 09:37:13.092508       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1227 09:37:13.337719       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
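
The "quota admission added evaluator" lines simply record the first object of each kind passing through quota admission; nothing in this section is an error. The healthz probe minikube issues at 09:37:14 can normally be reproduced anonymously, since default RBAC exposes the health endpoints to unauthenticated users:

    # Same probe as api_server.go:299 in the minikube log above
    curl -k https://192.168.76.2:8443/healthz
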
	
	
	==> kube-controller-manager [a90c2c83dc04390bf394bc666e08a3d41efe9dc162706683b3578d9d9bbee3c0] <==
	I1227 09:37:12.247907       1 shared_informer.go:377] "Caches are synced"
	I1227 09:37:12.245203       1 shared_informer.go:377] "Caches are synced"
	I1227 09:37:12.240623       1 shared_informer.go:377] "Caches are synced"
	I1227 09:37:12.248104       1 shared_informer.go:377] "Caches are synced"
	I1227 09:37:12.252018       1 shared_informer.go:377] "Caches are synced"
	I1227 09:37:12.252867       1 shared_informer.go:377] "Caches are synced"
	I1227 09:37:12.253051       1 range_allocator.go:177] "Sending events to api server"
	I1227 09:37:12.253140       1 range_allocator.go:181] "Starting range CIDR allocator"
	I1227 09:37:12.253148       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 09:37:12.253155       1 shared_informer.go:377] "Caches are synced"
	I1227 09:37:12.252859       1 shared_informer.go:377] "Caches are synced"
	I1227 09:37:12.254413       1 shared_informer.go:377] "Caches are synced"
	I1227 09:37:12.254656       1 shared_informer.go:377] "Caches are synced"
	I1227 09:37:12.254926       1 shared_informer.go:377] "Caches are synced"
	I1227 09:37:12.255529       1 shared_informer.go:377] "Caches are synced"
	I1227 09:37:12.252864       1 shared_informer.go:377] "Caches are synced"
	I1227 09:37:12.255586       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I1227 09:37:12.256608       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="newest-cni-246956"
	I1227 09:37:12.256679       1 node_lifecycle_controller.go:1038] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I1227 09:37:12.264139       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 09:37:12.265277       1 range_allocator.go:433] "Set node PodCIDR" node="newest-cni-246956" podCIDRs=["10.42.0.0/24"]
	I1227 09:37:12.347525       1 shared_informer.go:377] "Caches are synced"
	I1227 09:37:12.347550       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1227 09:37:12.347557       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1227 09:37:12.368543       1 shared_informer.go:377] "Caches are synced"
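
Once its informer caches sync, the range allocator hands the node its PodCIDR (10.42.0.0/24, matching the describe output above), and the node-lifecycle controller enters master disruption mode because the sole node is still NotReady. The allocation is visible on the node object:

    # Confirm the PodCIDR the range allocator just set (value from the log)
    kubectl get node newest-cni-246956 -o jsonpath='{.spec.podCIDR}'
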
	
	
	==> kube-proxy [c833d23613c1b7894c0429b71a6606d855676175011857c9c72113e4f98a89b0] <==
	I1227 09:37:13.805394       1 server_linux.go:53] "Using iptables proxy"
	I1227 09:37:13.871144       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 09:37:13.971970       1 shared_informer.go:377] "Caches are synced"
	I1227 09:37:13.972006       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1227 09:37:13.972091       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1227 09:37:13.997047       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1227 09:37:13.997119       1 server_linux.go:136] "Using iptables Proxier"
	I1227 09:37:14.004331       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1227 09:37:14.004834       1 server.go:529] "Version info" version="v1.35.0"
	I1227 09:37:14.004860       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 09:37:14.006654       1 config.go:200] "Starting service config controller"
	I1227 09:37:14.006666       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1227 09:37:14.008520       1 config.go:403] "Starting serviceCIDR config controller"
	I1227 09:37:14.008543       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1227 09:37:14.008284       1 config.go:106] "Starting endpoint slice config controller"
	I1227 09:37:14.008922       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1227 09:37:14.008218       1 config.go:309] "Starting node config controller"
	I1227 09:37:14.012576       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1227 09:37:14.012593       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1227 09:37:14.107617       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1227 09:37:14.109353       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1227 09:37:14.109378       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
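
The only non-info line in the kube-proxy log is the configuration hint about nodePortAddresses. In a kubeadm-style cluster that setting lives in the KubeProxyConfiguration embedded in the kube-proxy ConfigMap rather than on the command line; a hedged sketch, using the "primary" value the warning itself suggests:

    # Scope NodePorts to the node's primary IPs
    kubectl -n kube-system edit configmap kube-proxy
    #   in the embedded KubeProxyConfiguration, set:
    #   nodePortAddresses: ["primary"]
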
	
	
	==> kube-scheduler [e76b1be920faea45108ee797d90567a1ffc31cd051b1b61c9386d488f24d6842] <==
	E1227 09:37:05.498255       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E1227 09:37:05.498270       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E1227 09:37:05.498310       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E1227 09:37:05.498351       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E1227 09:37:05.498422       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E1227 09:37:05.498475       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E1227 09:37:05.498539       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E1227 09:37:05.498586       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E1227 09:37:05.498086       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E1227 09:37:05.498182       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E1227 09:37:05.499250       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E1227 09:37:05.499552       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E1227 09:37:06.326551       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E1227 09:37:06.343353       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E1227 09:37:06.390395       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E1227 09:37:06.422709       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E1227 09:37:06.447199       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E1227 09:37:06.459693       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E1227 09:37:06.466578       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E1227 09:37:06.567903       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E1227 09:37:06.581696       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E1227 09:37:06.607333       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E1227 09:37:06.638914       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E1227 09:37:06.648871       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1693" type="*v1.ConfigMap"
	I1227 09:37:09.387479       1 shared_informer.go:377] "Caches are synced"
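
The burst of "Failed to watch ... forbidden" errors is the usual bootstrap race: the scheduler starts before the apiserver has reconciled the system:kube-scheduler RBAC, and the final "Caches are synced" line at 09:37:09 shows it recovered. If the errors persisted, impersonation would pinpoint the missing grant:

    # Verify the scheduler's permissions once RBAC bootstrapping settles
    kubectl auth can-i list persistentvolumes --as=system:kube-scheduler
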
	
	
	==> kubelet <==
	Dec 27 09:37:08 newest-cni-246956 kubelet[1310]: E1227 09:37:08.673569    1310 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-newest-cni-246956" containerName="kube-controller-manager"
	Dec 27 09:37:08 newest-cni-246956 kubelet[1310]: I1227 09:37:08.690944    1310 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-apiserver-newest-cni-246956" podStartSLOduration=1.6909243090000001 podStartE2EDuration="1.690924309s" podCreationTimestamp="2025-12-27 09:37:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 09:37:08.661823069 +0000 UTC m=+1.109244006" watchObservedRunningTime="2025-12-27 09:37:08.690924309 +0000 UTC m=+1.138345249"
	Dec 27 09:37:08 newest-cni-246956 kubelet[1310]: I1227 09:37:08.701054    1310 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-controller-manager-newest-cni-246956" podStartSLOduration=2.701038591 podStartE2EDuration="2.701038591s" podCreationTimestamp="2025-12-27 09:37:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 09:37:08.700928554 +0000 UTC m=+1.148349492" watchObservedRunningTime="2025-12-27 09:37:08.701038591 +0000 UTC m=+1.148459541"
	Dec 27 09:37:08 newest-cni-246956 kubelet[1310]: I1227 09:37:08.701172    1310 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/etcd-newest-cni-246956" podStartSLOduration=1.701163362 podStartE2EDuration="1.701163362s" podCreationTimestamp="2025-12-27 09:37:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 09:37:08.691335624 +0000 UTC m=+1.138756560" watchObservedRunningTime="2025-12-27 09:37:08.701163362 +0000 UTC m=+1.148584296"
	Dec 27 09:37:08 newest-cni-246956 kubelet[1310]: I1227 09:37:08.712129    1310 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-scheduler-newest-cni-246956" podStartSLOduration=1.7120822279999999 podStartE2EDuration="1.712082228s" podCreationTimestamp="2025-12-27 09:37:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 09:37:08.711601033 +0000 UTC m=+1.159021969" watchObservedRunningTime="2025-12-27 09:37:08.712082228 +0000 UTC m=+1.159503165"
	Dec 27 09:37:09 newest-cni-246956 kubelet[1310]: E1227 09:37:09.655567    1310 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-246956" containerName="etcd"
	Dec 27 09:37:09 newest-cni-246956 kubelet[1310]: E1227 09:37:09.655996    1310 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-246956" containerName="kube-scheduler"
	Dec 27 09:37:09 newest-cni-246956 kubelet[1310]: E1227 09:37:09.656540    1310 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-246956" containerName="kube-apiserver"
	Dec 27 09:37:09 newest-cni-246956 kubelet[1310]: E1227 09:37:09.656862    1310 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-newest-cni-246956" containerName="kube-controller-manager"
	Dec 27 09:37:10 newest-cni-246956 kubelet[1310]: E1227 09:37:10.657493    1310 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-246956" containerName="etcd"
	Dec 27 09:37:10 newest-cni-246956 kubelet[1310]: E1227 09:37:10.657680    1310 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-246956" containerName="kube-scheduler"
	Dec 27 09:37:11 newest-cni-246956 kubelet[1310]: E1227 09:37:11.659264    1310 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-246956" containerName="kube-scheduler"
	Dec 27 09:37:12 newest-cni-246956 kubelet[1310]: I1227 09:37:12.271105    1310 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Dec 27 09:37:12 newest-cni-246956 kubelet[1310]: I1227 09:37:12.271977    1310 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Dec 27 09:37:13 newest-cni-246956 kubelet[1310]: I1227 09:37:13.461208    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e2185b04-5cba-4c54-86e0-9c2515f95074-lib-modules\") pod \"kindnet-lmtxw\" (UID: \"e2185b04-5cba-4c54-86e0-9c2515f95074\") " pod="kube-system/kindnet-lmtxw"
	Dec 27 09:37:13 newest-cni-246956 kubelet[1310]: I1227 09:37:13.461951    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a1e5773a-e15f-405b-bca5-62a52d6e83a2-kube-proxy\") pod \"kube-proxy-65ltj\" (UID: \"a1e5773a-e15f-405b-bca5-62a52d6e83a2\") " pod="kube-system/kube-proxy-65ltj"
	Dec 27 09:37:13 newest-cni-246956 kubelet[1310]: I1227 09:37:13.462018    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/e2185b04-5cba-4c54-86e0-9c2515f95074-cni-cfg\") pod \"kindnet-lmtxw\" (UID: \"e2185b04-5cba-4c54-86e0-9c2515f95074\") " pod="kube-system/kindnet-lmtxw"
	Dec 27 09:37:13 newest-cni-246956 kubelet[1310]: I1227 09:37:13.462046    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5w4b\" (UniqueName: \"kubernetes.io/projected/e2185b04-5cba-4c54-86e0-9c2515f95074-kube-api-access-p5w4b\") pod \"kindnet-lmtxw\" (UID: \"e2185b04-5cba-4c54-86e0-9c2515f95074\") " pod="kube-system/kindnet-lmtxw"
	Dec 27 09:37:13 newest-cni-246956 kubelet[1310]: I1227 09:37:13.462072    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a1e5773a-e15f-405b-bca5-62a52d6e83a2-xtables-lock\") pod \"kube-proxy-65ltj\" (UID: \"a1e5773a-e15f-405b-bca5-62a52d6e83a2\") " pod="kube-system/kube-proxy-65ltj"
	Dec 27 09:37:13 newest-cni-246956 kubelet[1310]: I1227 09:37:13.462092    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a1e5773a-e15f-405b-bca5-62a52d6e83a2-lib-modules\") pod \"kube-proxy-65ltj\" (UID: \"a1e5773a-e15f-405b-bca5-62a52d6e83a2\") " pod="kube-system/kube-proxy-65ltj"
	Dec 27 09:37:13 newest-cni-246956 kubelet[1310]: I1227 09:37:13.462112    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trkj7\" (UniqueName: \"kubernetes.io/projected/a1e5773a-e15f-405b-bca5-62a52d6e83a2-kube-api-access-trkj7\") pod \"kube-proxy-65ltj\" (UID: \"a1e5773a-e15f-405b-bca5-62a52d6e83a2\") " pod="kube-system/kube-proxy-65ltj"
	Dec 27 09:37:13 newest-cni-246956 kubelet[1310]: I1227 09:37:13.462138    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e2185b04-5cba-4c54-86e0-9c2515f95074-xtables-lock\") pod \"kindnet-lmtxw\" (UID: \"e2185b04-5cba-4c54-86e0-9c2515f95074\") " pod="kube-system/kindnet-lmtxw"
	Dec 27 09:37:13 newest-cni-246956 kubelet[1310]: E1227 09:37:13.502949    1310 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-246956" containerName="kube-apiserver"
	Dec 27 09:37:13 newest-cni-246956 kubelet[1310]: E1227 09:37:13.627149    1310 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-246956" containerName="kube-scheduler"
	Dec 27 09:37:14 newest-cni-246956 kubelet[1310]: E1227 09:37:14.646651    1310 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-newest-cni-246956" containerName="kube-controller-manager"
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-246956 -n newest-cni-246956
helpers_test.go:270: (dbg) Run:  kubectl --context newest-cni-246956 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: coredns-7d764666f9-kqzph storage-provisioner
helpers_test.go:283: ======> post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context newest-cni-246956 describe pod coredns-7d764666f9-kqzph storage-provisioner
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context newest-cni-246956 describe pod coredns-7d764666f9-kqzph storage-provisioner: exit status 1 (83.684098ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-7d764666f9-kqzph" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context newest-cni-246956 describe pod coredns-7d764666f9-kqzph storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.63s)
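The NotFound errors above mean the two non-running pods reported at helpers_test.go:281 had already disappeared by the time the describe at helpers_test.go:286 ran, so the post-mortem captured no detail for them. A minimal way to re-check by hand, assuming the newest-cni-246956 context still exists, is to reuse the same field selector the helper uses (illustrative command, not part of the test):

	kubectl --context newest-cni-246956 get pods -A --field-selector=status.phase!=Running -o wide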

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (5.8s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-246956 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p newest-cni-246956 --alsologtostderr -v=1: exit status 80 (2.097818774s)

                                                
                                                
-- stdout --
	* Pausing node newest-cni-246956 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1227 09:37:38.172388  645064 out.go:360] Setting OutFile to fd 1 ...
	I1227 09:37:38.172642  645064 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:37:38.172651  645064 out.go:374] Setting ErrFile to fd 2...
	I1227 09:37:38.172655  645064 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:37:38.172870  645064 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22343-373581/.minikube/bin
	I1227 09:37:38.173122  645064 out.go:368] Setting JSON to false
	I1227 09:37:38.173140  645064 mustload.go:66] Loading cluster: newest-cni-246956
	I1227 09:37:38.173466  645064 config.go:182] Loaded profile config "newest-cni-246956": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:37:38.173937  645064 cli_runner.go:164] Run: docker container inspect newest-cni-246956 --format={{.State.Status}}
	I1227 09:37:38.195315  645064 host.go:66] Checking if "newest-cni-246956" exists ...
	I1227 09:37:38.195630  645064 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 09:37:38.264474  645064 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:77 OomKillDisable:false NGoroutines:85 SystemTime:2025-12-27 09:37:38.250989833 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1227 09:37:38.265349  645064 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22158/minikube-v1.37.0-1766719468-22158-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1766719468-22158/minikube-v1.37.0-1766719468-22158-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1766719468-22158-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) preload-source:auto profile:newest-cni-246956 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) rosetta:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool
=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1227 09:37:38.267279  645064 out.go:179] * Pausing node newest-cni-246956 ... 
	I1227 09:37:38.268287  645064 host.go:66] Checking if "newest-cni-246956" exists ...
	I1227 09:37:38.268676  645064 ssh_runner.go:195] Run: systemctl --version
	I1227 09:37:38.268783  645064 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-246956
	I1227 09:37:38.299378  645064 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33478 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/newest-cni-246956/id_rsa Username:docker}
	I1227 09:37:38.391739  645064 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 09:37:38.407898  645064 pause.go:52] kubelet running: true
	I1227 09:37:38.407989  645064 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1227 09:37:38.547500  645064 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1227 09:37:38.547586  645064 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1227 09:37:38.616002  645064 cri.go:96] found id: "90d4a673309e96dc2ba67d645eafe93dc5d29a0ecfd31ba0ae335db9ed033522"
	I1227 09:37:38.616024  645064 cri.go:96] found id: "af83311425132a6a300023f95645d55dec6cac5be7dc8ac1f7c804dee0171a95"
	I1227 09:37:38.616028  645064 cri.go:96] found id: "2e65f48ec568aa4a15024b2eea0baac4a75982b1ef72891857cb7ece1b229a93"
	I1227 09:37:38.616031  645064 cri.go:96] found id: "f2b4d1b51b8b5d05a7022c93ea1d77fcf987df7336ffa16aa507218f305194b7"
	I1227 09:37:38.616034  645064 cri.go:96] found id: "5263ceb230522a69b34eb816b6de3b89096d950d9862e603732cbf4a1c75836e"
	I1227 09:37:38.616037  645064 cri.go:96] found id: "270f3f6fdb6aa9105489c1aa87a857c489190a97d3b5dadc4b9357252b50af3a"
	I1227 09:37:38.616040  645064 cri.go:96] found id: ""
	I1227 09:37:38.616078  645064 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 09:37:38.629941  645064 retry.go:84] will retry after 200ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T09:37:38Z" level=error msg="open /run/runc: no such file or directory"
	I1227 09:37:38.812344  645064 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 09:37:38.828166  645064 pause.go:52] kubelet running: false
	I1227 09:37:38.828215  645064 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1227 09:37:38.954714  645064 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1227 09:37:38.954826  645064 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1227 09:37:39.034423  645064 cri.go:96] found id: "90d4a673309e96dc2ba67d645eafe93dc5d29a0ecfd31ba0ae335db9ed033522"
	I1227 09:37:39.034673  645064 cri.go:96] found id: "af83311425132a6a300023f95645d55dec6cac5be7dc8ac1f7c804dee0171a95"
	I1227 09:37:39.034716  645064 cri.go:96] found id: "2e65f48ec568aa4a15024b2eea0baac4a75982b1ef72891857cb7ece1b229a93"
	I1227 09:37:39.034751  645064 cri.go:96] found id: "f2b4d1b51b8b5d05a7022c93ea1d77fcf987df7336ffa16aa507218f305194b7"
	I1227 09:37:39.034766  645064 cri.go:96] found id: "5263ceb230522a69b34eb816b6de3b89096d950d9862e603732cbf4a1c75836e"
	I1227 09:37:39.034964  645064 cri.go:96] found id: "270f3f6fdb6aa9105489c1aa87a857c489190a97d3b5dadc4b9357252b50af3a"
	I1227 09:37:39.035005  645064 cri.go:96] found id: ""
	I1227 09:37:39.035251  645064 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 09:37:39.420125  645064 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 09:37:39.434650  645064 pause.go:52] kubelet running: false
	I1227 09:37:39.434722  645064 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1227 09:37:39.557374  645064 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1227 09:37:39.557445  645064 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1227 09:37:39.627537  645064 cri.go:96] found id: "90d4a673309e96dc2ba67d645eafe93dc5d29a0ecfd31ba0ae335db9ed033522"
	I1227 09:37:39.627559  645064 cri.go:96] found id: "af83311425132a6a300023f95645d55dec6cac5be7dc8ac1f7c804dee0171a95"
	I1227 09:37:39.627563  645064 cri.go:96] found id: "2e65f48ec568aa4a15024b2eea0baac4a75982b1ef72891857cb7ece1b229a93"
	I1227 09:37:39.627579  645064 cri.go:96] found id: "f2b4d1b51b8b5d05a7022c93ea1d77fcf987df7336ffa16aa507218f305194b7"
	I1227 09:37:39.627584  645064 cri.go:96] found id: "5263ceb230522a69b34eb816b6de3b89096d950d9862e603732cbf4a1c75836e"
	I1227 09:37:39.627589  645064 cri.go:96] found id: "270f3f6fdb6aa9105489c1aa87a857c489190a97d3b5dadc4b9357252b50af3a"
	I1227 09:37:39.627594  645064 cri.go:96] found id: ""
	I1227 09:37:39.627645  645064 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 09:37:39.993561  645064 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 09:37:40.006392  645064 pause.go:52] kubelet running: false
	I1227 09:37:40.006461  645064 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1227 09:37:40.121800  645064 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1227 09:37:40.121878  645064 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1227 09:37:40.189896  645064 cri.go:96] found id: "90d4a673309e96dc2ba67d645eafe93dc5d29a0ecfd31ba0ae335db9ed033522"
	I1227 09:37:40.189918  645064 cri.go:96] found id: "af83311425132a6a300023f95645d55dec6cac5be7dc8ac1f7c804dee0171a95"
	I1227 09:37:40.189921  645064 cri.go:96] found id: "2e65f48ec568aa4a15024b2eea0baac4a75982b1ef72891857cb7ece1b229a93"
	I1227 09:37:40.189925  645064 cri.go:96] found id: "f2b4d1b51b8b5d05a7022c93ea1d77fcf987df7336ffa16aa507218f305194b7"
	I1227 09:37:40.189927  645064 cri.go:96] found id: "5263ceb230522a69b34eb816b6de3b89096d950d9862e603732cbf4a1c75836e"
	I1227 09:37:40.189931  645064 cri.go:96] found id: "270f3f6fdb6aa9105489c1aa87a857c489190a97d3b5dadc4b9357252b50af3a"
	I1227 09:37:40.189934  645064 cri.go:96] found id: ""
	I1227 09:37:40.189986  645064 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 09:37:40.203143  645064 out.go:203] 
	W1227 09:37:40.204048  645064 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T09:37:40Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T09:37:40Z" level=error msg="open /run/runc: no such file or directory"
	
	W1227 09:37:40.204065  645064 out.go:285] * 
	* 
	W1227 09:37:40.206327  645064 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1227 09:37:40.207301  645064 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p newest-cni-246956 --alsologtostderr -v=1 failed: exit status 80
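The exit status 80 above bottoms out in "sudo runc list -f json" failing with "open /run/runc: no such file or directory": crictl reports six running kube-system containers, but the runc state directory the pause code lists is missing, and every retry hits the same error. A diagnostic sketch for checking this by hand over SSH, assuming the profile is still up (these commands are illustrative, not part of the test suite):

	# does the runc state directory exist inside the node container?
	out/minikube-linux-amd64 -p newest-cni-246956 ssh "sudo ls -la /run/runc"
	# reproduce the exact call the pause code makes
	out/minikube-linux-amd64 -p newest-cni-246956 ssh "sudo runc list -f json"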
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-246956
helpers_test.go:244: (dbg) docker inspect newest-cni-246956:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "69aebd25b47b0182c49b994a470b77a9e5869eea562f8bba9c36596e6b80b45b",
	        "Created": "2025-12-27T09:36:55.867553755Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 641826,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-27T09:37:27.61803496Z",
	            "FinishedAt": "2025-12-27T09:37:26.333746866Z"
	        },
	        "Image": "sha256:c444b5e11df9d6bc496256b0e11f4e11deb33a885211ded2c3a4667df55bcb6b",
	        "ResolvConfPath": "/var/lib/docker/containers/69aebd25b47b0182c49b994a470b77a9e5869eea562f8bba9c36596e6b80b45b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/69aebd25b47b0182c49b994a470b77a9e5869eea562f8bba9c36596e6b80b45b/hostname",
	        "HostsPath": "/var/lib/docker/containers/69aebd25b47b0182c49b994a470b77a9e5869eea562f8bba9c36596e6b80b45b/hosts",
	        "LogPath": "/var/lib/docker/containers/69aebd25b47b0182c49b994a470b77a9e5869eea562f8bba9c36596e6b80b45b/69aebd25b47b0182c49b994a470b77a9e5869eea562f8bba9c36596e6b80b45b-json.log",
	        "Name": "/newest-cni-246956",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-246956:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-246956",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "69aebd25b47b0182c49b994a470b77a9e5869eea562f8bba9c36596e6b80b45b",
	                "LowerDir": "/var/lib/docker/overlay2/e04614f1bb8ab7149b79d93791f2b8a450769fb1c2807a94e1bc8526dafac32d-init/diff:/var/lib/docker/overlay2/8c7e7d4b0a6e752d3b033998593f18412fe880cdae3fca36ce4655d5d5dd6a34/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e04614f1bb8ab7149b79d93791f2b8a450769fb1c2807a94e1bc8526dafac32d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e04614f1bb8ab7149b79d93791f2b8a450769fb1c2807a94e1bc8526dafac32d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e04614f1bb8ab7149b79d93791f2b8a450769fb1c2807a94e1bc8526dafac32d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-246956",
	                "Source": "/var/lib/docker/volumes/newest-cni-246956/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-246956",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-246956",
	                "name.minikube.sigs.k8s.io": "newest-cni-246956",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "80814fc4ffd756c64bc6afc543b1698a6355b1ce5c6eb3c2d1b8bb82df0bf57d",
	            "SandboxKey": "/var/run/docker/netns/80814fc4ffd7",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33478"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33479"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33482"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33480"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33481"
	                    }
	                ]
	            },
	            "Networks": {
	                "newest-cni-246956": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "cdd553080fb08dffc74b490be6da4ecef89ab2e05674ec3b26e123185e152840",
	                    "EndpointID": "5376213e84fdc700e2cad491390ea11fa6f0287948a6e68e7d0ebe44714ce84d",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "36:2c:8b:35:b7:fc",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-246956",
	                        "69aebd25b47b"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
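Two fields in the inspect dump above are worth noting when reading the failed pause: State.Paused is still false, and HostConfig.Tmpfs shows /run (the parent of the missing /run/runc directory) mounted as tmpfs, so its contents are recreated on each container start, including the restart recorded in State.StartedAt. A sketch for pulling just those fields with standard docker templating, assuming the container still exists:

	docker inspect -f '{{.State.Paused}}' newest-cni-246956
	docker inspect -f '{{json .HostConfig.Tmpfs}}' newest-cni-246956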
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-246956 -n newest-cni-246956
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-246956 -n newest-cni-246956: exit status 2 (351.115947ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-246956 logs -n 25
helpers_test.go:261: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p embed-certs-912564 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                        │ embed-certs-912564           │ jenkins │ v1.37.0 │ 27 Dec 25 09:36 UTC │ 27 Dec 25 09:36 UTC │
	│ addons  │ enable metrics-server -p no-preload-963457 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-963457            │ jenkins │ v1.37.0 │ 27 Dec 25 09:36 UTC │                     │
	│ stop    │ -p no-preload-963457 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-963457            │ jenkins │ v1.37.0 │ 27 Dec 25 09:36 UTC │ 27 Dec 25 09:36 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-497722 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-497722 │ jenkins │ v1.37.0 │ 27 Dec 25 09:36 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-497722 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-497722 │ jenkins │ v1.37.0 │ 27 Dec 25 09:36 UTC │ 27 Dec 25 09:36 UTC │
	│ image   │ old-k8s-version-094398 image list --format=json                                                                                                                                                                                               │ old-k8s-version-094398       │ jenkins │ v1.37.0 │ 27 Dec 25 09:36 UTC │ 27 Dec 25 09:36 UTC │
	│ pause   │ -p old-k8s-version-094398 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-094398       │ jenkins │ v1.37.0 │ 27 Dec 25 09:36 UTC │                     │
	│ delete  │ -p old-k8s-version-094398                                                                                                                                                                                                                     │ old-k8s-version-094398       │ jenkins │ v1.37.0 │ 27 Dec 25 09:36 UTC │ 27 Dec 25 09:36 UTC │
	│ addons  │ enable dashboard -p no-preload-963457 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-963457            │ jenkins │ v1.37.0 │ 27 Dec 25 09:36 UTC │ 27 Dec 25 09:36 UTC │
	│ start   │ -p no-preload-963457 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                       │ no-preload-963457            │ jenkins │ v1.37.0 │ 27 Dec 25 09:36 UTC │ 27 Dec 25 09:37 UTC │
	│ delete  │ -p old-k8s-version-094398                                                                                                                                                                                                                     │ old-k8s-version-094398       │ jenkins │ v1.37.0 │ 27 Dec 25 09:36 UTC │ 27 Dec 25 09:36 UTC │
	│ start   │ -p newest-cni-246956 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ newest-cni-246956            │ jenkins │ v1.37.0 │ 27 Dec 25 09:36 UTC │ 27 Dec 25 09:37 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-497722 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-497722 │ jenkins │ v1.37.0 │ 27 Dec 25 09:36 UTC │ 27 Dec 25 09:36 UTC │
	│ start   │ -p default-k8s-diff-port-497722 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-497722 │ jenkins │ v1.37.0 │ 27 Dec 25 09:36 UTC │                     │
	│ image   │ embed-certs-912564 image list --format=json                                                                                                                                                                                                   │ embed-certs-912564           │ jenkins │ v1.37.0 │ 27 Dec 25 09:37 UTC │ 27 Dec 25 09:37 UTC │
	│ pause   │ -p embed-certs-912564 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-912564           │ jenkins │ v1.37.0 │ 27 Dec 25 09:37 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-246956 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-246956            │ jenkins │ v1.37.0 │ 27 Dec 25 09:37 UTC │                     │
	│ stop    │ -p newest-cni-246956 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-246956            │ jenkins │ v1.37.0 │ 27 Dec 25 09:37 UTC │ 27 Dec 25 09:37 UTC │
	│ delete  │ -p embed-certs-912564                                                                                                                                                                                                                         │ embed-certs-912564           │ jenkins │ v1.37.0 │ 27 Dec 25 09:37 UTC │ 27 Dec 25 09:37 UTC │
	│ delete  │ -p embed-certs-912564                                                                                                                                                                                                                         │ embed-certs-912564           │ jenkins │ v1.37.0 │ 27 Dec 25 09:37 UTC │ 27 Dec 25 09:37 UTC │
	│ start   │ -p auto-157923 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-157923                  │ jenkins │ v1.37.0 │ 27 Dec 25 09:37 UTC │                     │
	│ addons  │ enable dashboard -p newest-cni-246956 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-246956            │ jenkins │ v1.37.0 │ 27 Dec 25 09:37 UTC │ 27 Dec 25 09:37 UTC │
	│ start   │ -p newest-cni-246956 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ newest-cni-246956            │ jenkins │ v1.37.0 │ 27 Dec 25 09:37 UTC │ 27 Dec 25 09:37 UTC │
	│ image   │ newest-cni-246956 image list --format=json                                                                                                                                                                                                    │ newest-cni-246956            │ jenkins │ v1.37.0 │ 27 Dec 25 09:37 UTC │ 27 Dec 25 09:37 UTC │
	│ pause   │ -p newest-cni-246956 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-246956            │ jenkins │ v1.37.0 │ 27 Dec 25 09:37 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/27 09:37:27
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1227 09:37:27.378899  641411 out.go:360] Setting OutFile to fd 1 ...
	I1227 09:37:27.379011  641411 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:37:27.379016  641411 out.go:374] Setting ErrFile to fd 2...
	I1227 09:37:27.379021  641411 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:37:27.379238  641411 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22343-373581/.minikube/bin
	I1227 09:37:27.379675  641411 out.go:368] Setting JSON to false
	I1227 09:37:27.380937  641411 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4791,"bootTime":1766823456,"procs":363,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1227 09:37:27.380997  641411 start.go:143] virtualization: kvm guest
	I1227 09:37:27.382516  641411 out.go:179] * [newest-cni-246956] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1227 09:37:27.383534  641411 out.go:179]   - MINIKUBE_LOCATION=22343
	I1227 09:37:27.383557  641411 notify.go:221] Checking for updates...
	I1227 09:37:27.385354  641411 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 09:37:27.386240  641411 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22343-373581/kubeconfig
	I1227 09:37:27.387079  641411 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22343-373581/.minikube
	I1227 09:37:27.391187  641411 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1227 09:37:27.392096  641411 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1227 09:37:27.393369  641411 config.go:182] Loaded profile config "newest-cni-246956": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:37:27.394370  641411 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 09:37:27.421776  641411 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1227 09:37:27.421886  641411 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 09:37:27.481486  641411 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-27 09:37:27.470853214 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1227 09:37:27.481650  641411 docker.go:319] overlay module found
	I1227 09:37:27.483174  641411 out.go:179] * Using the docker driver based on existing profile
	I1227 09:37:27.484194  641411 start.go:309] selected driver: docker
	I1227 09:37:27.484211  641411 start.go:928] validating driver "docker" against &{Name:newest-cni-246956 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-246956 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 09:37:27.484329  641411 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 09:37:27.485149  641411 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 09:37:27.541981  641411 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-27 09:37:27.531020677 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1227 09:37:27.542283  641411 start_flags.go:1038] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1227 09:37:27.542328  641411 cni.go:84] Creating CNI manager for ""
	I1227 09:37:27.542404  641411 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 09:37:27.542469  641411 start.go:353] cluster config:
	{Name:newest-cni-246956 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-246956 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
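
The "recommending kindnet" line above reflects a driver/runtime decision table: the docker driver paired with the crio runtime gets kindnet as its CNI. A minimal Go sketch of that kind of lookup, with illustrative names (chooseCNI and its bridge fallback are assumptions for this sketch, not minikube's actual pkg/minikube/cni code):

package main

import "fmt"

// chooseCNI maps a driver/runtime pair to a CNI recommendation, as in the
// `"docker" driver + "crio" runtime found, recommending kindnet` line above.
func chooseCNI(driver, runtime string) string {
	if driver == "docker" && runtime == "crio" {
		return "kindnet"
	}
	return "bridge" // assumed fallback for this sketch only
}

func main() {
	fmt.Println(chooseCNI("docker", "crio")) // kindnet
}
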
	I1227 09:37:27.544837  641411 out.go:179] * Starting "newest-cni-246956" primary control-plane node in "newest-cni-246956" cluster
	I1227 09:37:27.545721  641411 cache.go:134] Beginning downloading kic base image for docker with crio
	I1227 09:37:27.546820  641411 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
	I1227 09:37:27.547908  641411 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 09:37:27.547944  641411 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22343-373581/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4
	I1227 09:37:27.547954  641411 cache.go:65] Caching tarball of preloaded images
	I1227 09:37:27.548011  641411 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1227 09:37:27.548043  641411 preload.go:251] Found /home/jenkins/minikube-integration/22343-373581/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1227 09:37:27.548053  641411 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I1227 09:37:27.548175  641411 profile.go:143] Saving config to /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/newest-cni-246956/config.json ...
	I1227 09:37:27.567870  641411 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon, skipping pull
	I1227 09:37:27.567891  641411 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in daemon, skipping load
	I1227 09:37:27.567905  641411 cache.go:243] Successfully downloaded all kic artifacts
	I1227 09:37:27.567949  641411 start.go:360] acquireMachinesLock for newest-cni-246956: {Name:mkce071e540487b97cbc77937d99e9ae86cc89ee Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 09:37:27.568004  641411 start.go:364] duration metric: took 37.764µs to acquireMachinesLock for "newest-cni-246956"
	I1227 09:37:27.568022  641411 start.go:96] Skipping create...Using existing machine configuration
	I1227 09:37:27.568028  641411 fix.go:54] fixHost starting: 
	I1227 09:37:27.568299  641411 cli_runner.go:164] Run: docker container inspect newest-cni-246956 --format={{.State.Status}}
	I1227 09:37:27.590667  641411 fix.go:112] recreateIfNeeded on newest-cni-246956: state=Stopped err=<nil>
	W1227 09:37:27.590698  641411 fix.go:138] unexpected machine state, will restart: <nil>
	I1227 09:37:23.593860  640477 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1227 09:37:23.594049  640477 start.go:159] libmachine.API.Create for "auto-157923" (driver="docker")
	I1227 09:37:23.594077  640477 client.go:173] LocalClient.Create starting
	I1227 09:37:23.594137  640477 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca.pem
	I1227 09:37:23.594167  640477 main.go:144] libmachine: Decoding PEM data...
	I1227 09:37:23.594185  640477 main.go:144] libmachine: Parsing certificate...
	I1227 09:37:23.594243  640477 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22343-373581/.minikube/certs/cert.pem
	I1227 09:37:23.594262  640477 main.go:144] libmachine: Decoding PEM data...
	I1227 09:37:23.594273  640477 main.go:144] libmachine: Parsing certificate...
	I1227 09:37:23.594578  640477 cli_runner.go:164] Run: docker network inspect auto-157923 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1227 09:37:23.610819  640477 cli_runner.go:211] docker network inspect auto-157923 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1227 09:37:23.610896  640477 network_create.go:284] running [docker network inspect auto-157923] to gather additional debugging logs...
	I1227 09:37:23.610918  640477 cli_runner.go:164] Run: docker network inspect auto-157923
	W1227 09:37:23.627099  640477 cli_runner.go:211] docker network inspect auto-157923 returned with exit code 1
	I1227 09:37:23.627123  640477 network_create.go:287] error running [docker network inspect auto-157923]: docker network inspect auto-157923: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network auto-157923 not found
	I1227 09:37:23.627134  640477 network_create.go:289] output of [docker network inspect auto-157923]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network auto-157923 not found
	
	** /stderr **
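
The failed inspect is immediately re-run without a --format string so the raw stdout/stderr can be captured verbatim for the log, producing the -- stdout -- / ** stderr ** block above. Roughly, that capture pattern looks like the following Go sketch (inspectNetwork is an illustrative name, not minikube's cli_runner API):

package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

// inspectNetwork runs `docker network inspect <name>` and returns both
// output streams so a failure can be logged verbatim, mirroring the
// debugging re-run in the trace above.
func inspectNetwork(name string) (string, string, error) {
	var stdout, stderr bytes.Buffer
	cmd := exec.Command("docker", "network", "inspect", name)
	cmd.Stdout = &stdout
	cmd.Stderr = &stderr
	err := cmd.Run() // exit status 1 when the network does not exist
	return stdout.String(), stderr.String(), err
}

func main() {
	out, errOut, err := inspectNetwork("auto-157923")
	fmt.Printf("stdout: %s\nstderr: %s\nerr: %v\n", out, errOut, err)
}
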
	I1227 09:37:23.627260  640477 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 09:37:23.644238  640477 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-8c57ecff6d5e IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:1e:54:d8:ad:bd:73} reservation:<nil>}
	I1227 09:37:23.645188  640477 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-21a699476be6 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:fe:e8:d9:95:e6:36} reservation:<nil>}
	I1227 09:37:23.645807  640477 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-8e97c5356905 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:f2:d9:6b:42:f5:e3} reservation:<nil>}
	I1227 09:37:23.646472  640477 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-cdd553080fb0 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:92:73:29:56:1f:e3} reservation:<nil>}
	I1227 09:37:23.647038  640477 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-e27fce9ec482 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:26:8a:0d:7f:bc:3d} reservation:<nil>}
	I1227 09:37:23.647836  640477 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001df2770}
	I1227 09:37:23.647858  640477 network_create.go:124] attempt to create docker network auto-157923 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1227 09:37:23.647897  640477 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-157923 auto-157923
	I1227 09:37:23.696556  640477 network_create.go:108] docker network auto-157923 192.168.94.0/24 created
	I1227 09:37:23.696603  640477 kic.go:121] calculated static IP "192.168.94.2" for the "auto-157923" container
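
The subnet scan above walks candidate private /24 networks and takes the first one no existing bridge occupies: the third octet starts at 49 and advances in steps of 9 (49, 58, 67, 76, 85, 94). A Go sketch of that walk, with the real bridge-interface probing reduced to a `taken` map for illustration:

package main

import "fmt"

// firstFreeSubnet mirrors the scan logged above: candidates step through
// 192.168.49.0/24, .58, .67, ... until one is not already taken.
func firstFreeSubnet(taken map[int]bool) string {
	for octet := 49; octet <= 255; octet += 9 {
		if !taken[octet] {
			return fmt.Sprintf("192.168.%d.0/24", octet)
		}
	}
	return "" // no free candidate in range
}

func main() {
	taken := map[int]bool{49: true, 58: true, 67: true, 76: true, 85: true}
	fmt.Println(firstFreeSubnet(taken)) // 192.168.94.0/24
}

The container's static IP then falls out of the chosen subnet directly: the gateway takes .1 and the first client address is .2, matching the calculated 192.168.94.2 above.
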
	I1227 09:37:23.696690  640477 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1227 09:37:23.713970  640477 cli_runner.go:164] Run: docker volume create auto-157923 --label name.minikube.sigs.k8s.io=auto-157923 --label created_by.minikube.sigs.k8s.io=true
	I1227 09:37:23.732094  640477 oci.go:103] Successfully created a docker volume auto-157923
	I1227 09:37:23.732173  640477 cli_runner.go:164] Run: docker run --rm --name auto-157923-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-157923 --entrypoint /usr/bin/test -v auto-157923:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -d /var/lib
	I1227 09:37:24.133562  640477 oci.go:107] Successfully prepared a docker volume auto-157923
	I1227 09:37:24.133644  640477 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 09:37:24.133660  640477 kic.go:194] Starting extracting preloaded images to volume ...
	I1227 09:37:24.133725  640477 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22343-373581/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v auto-157923:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -I lz4 -xf /preloaded.tar -C /extractDir
	I1227 09:37:27.070484  640477 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22343-373581/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v auto-157923:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -I lz4 -xf /preloaded.tar -C /extractDir: (2.936698922s)
	I1227 09:37:27.070519  640477 kic.go:203] duration metric: took 2.936854305s to extract preloaded images to volume ...
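
Extracting the preload is itself done with Docker: a disposable kicbase container runs tar with lz4 decompression against the read-only bind-mounted tarball, so the images land in the named volume. A sketch of that invocation via os/exec, with the arguments copied from the command logged above (the paths in main are placeholders; the real ones come from the minikube cache):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// extractPreload runs a throwaway container whose entrypoint is tar,
// unpacking the lz4 tarball into the named volume mounted at /extractDir,
// and reports the duration the way the log above does.
func extractPreload(image, tarball, volume string) error {
	start := time.Now()
	err := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", tarball+":/preloaded.tar:ro",
		"-v", volume+":/extractDir",
		image, "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir").Run()
	fmt.Printf("took %s to extract preloaded images\n", time.Since(start))
	return err
}

func main() {
	// placeholder arguments for illustration only
	fmt.Println(extractPreload("gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316",
		"/tmp/preloaded-images.tar.lz4", "auto-157923"))
}
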
	W1227 09:37:27.070624  640477 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1227 09:37:27.070668  640477 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1227 09:37:27.070719  640477 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1227 09:37:27.125901  640477 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-157923 --name auto-157923 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-157923 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-157923 --network auto-157923 --ip 192.168.94.2 --volume auto-157923:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a
	I1227 09:37:27.415302  640477 cli_runner.go:164] Run: docker container inspect auto-157923 --format={{.State.Running}}
	I1227 09:37:27.434776  640477 cli_runner.go:164] Run: docker container inspect auto-157923 --format={{.State.Status}}
	I1227 09:37:27.457848  640477 cli_runner.go:164] Run: docker exec auto-157923 stat /var/lib/dpkg/alternatives/iptables
	I1227 09:37:27.504821  640477 oci.go:144] the created container "auto-157923" has a running status.
	I1227 09:37:27.504859  640477 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22343-373581/.minikube/machines/auto-157923/id_rsa...
	I1227 09:37:27.658588  640477 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22343-373581/.minikube/machines/auto-157923/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1227 09:37:27.687371  640477 cli_runner.go:164] Run: docker container inspect auto-157923 --format={{.State.Status}}
	I1227 09:37:27.712940  640477 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1227 09:37:27.712967  640477 kic_runner.go:114] Args: [docker exec --privileged auto-157923 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1227 09:37:27.773647  640477 cli_runner.go:164] Run: docker container inspect auto-157923 --format={{.State.Status}}
	I1227 09:37:27.796172  640477 machine.go:94] provisionDockerMachine start ...
	I1227 09:37:27.796272  640477 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-157923
	I1227 09:37:27.818454  640477 main.go:144] libmachine: Using SSH client type: native
	I1227 09:37:27.818781  640477 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33473 <nil> <nil>}
	I1227 09:37:27.818812  640477 main.go:144] libmachine: About to run SSH command:
	hostname
	I1227 09:37:27.956997  640477 main.go:144] libmachine: SSH cmd err, output: <nil>: auto-157923
	
	I1227 09:37:27.957026  640477 ubuntu.go:182] provisioning hostname "auto-157923"
	I1227 09:37:27.957086  640477 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-157923
	I1227 09:37:27.976063  640477 main.go:144] libmachine: Using SSH client type: native
	I1227 09:37:27.976373  640477 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33473 <nil> <nil>}
	I1227 09:37:27.976395  640477 main.go:144] libmachine: About to run SSH command:
	sudo hostname auto-157923 && echo "auto-157923" | sudo tee /etc/hostname
	I1227 09:37:28.114313  640477 main.go:144] libmachine: SSH cmd err, output: <nil>: auto-157923
	
	I1227 09:37:28.114408  640477 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-157923
	I1227 09:37:28.139306  640477 main.go:144] libmachine: Using SSH client type: native
	I1227 09:37:28.139654  640477 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33473 <nil> <nil>}
	I1227 09:37:28.139678  640477 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-157923' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-157923/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-157923' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1227 09:37:28.269032  640477 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1227 09:37:28.269063  640477 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22343-373581/.minikube CaCertPath:/home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22343-373581/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22343-373581/.minikube}
	I1227 09:37:28.269094  640477 ubuntu.go:190] setting up certificates
	I1227 09:37:28.269116  640477 provision.go:84] configureAuth start
	I1227 09:37:28.269181  640477 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-157923
	I1227 09:37:28.285950  640477 provision.go:143] copyHostCerts
	I1227 09:37:28.286010  640477 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-373581/.minikube/key.pem, removing ...
	I1227 09:37:28.286026  640477 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-373581/.minikube/key.pem
	I1227 09:37:28.286104  640477 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22343-373581/.minikube/key.pem (1679 bytes)
	I1227 09:37:28.286245  640477 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-373581/.minikube/ca.pem, removing ...
	I1227 09:37:28.286258  640477 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-373581/.minikube/ca.pem
	I1227 09:37:28.286301  640477 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22343-373581/.minikube/ca.pem (1082 bytes)
	I1227 09:37:28.286414  640477 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-373581/.minikube/cert.pem, removing ...
	I1227 09:37:28.286426  640477 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-373581/.minikube/cert.pem
	I1227 09:37:28.286460  640477 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22343-373581/.minikube/cert.pem (1123 bytes)
	I1227 09:37:28.286554  640477 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22343-373581/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca-key.pem org=jenkins.auto-157923 san=[127.0.0.1 192.168.94.2 auto-157923 localhost minikube]
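
configureAuth generates a server certificate whose SANs cover every name the machine answers to: the loopback address, the static IP, the hostname, localhost, and minikube, exactly the san=[...] list above. A compact Go sketch of building such a SAN set with crypto/x509 (self-signed here to stay short; minikube signs with its CA key instead):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.auto-157923"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the cluster config
		// the SAN list from the log above: IP and DNS entries
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.94.2")},
		DNSNames:    []string{"auto-157923", "localhost", "minikube"},
		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
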
	W1227 09:37:25.538466  629532 pod_ready.go:104] pod "coredns-7d764666f9-wnzhx" is not "Ready", error: <nil>
	W1227 09:37:28.038757  629532 pod_ready.go:104] pod "coredns-7d764666f9-wnzhx" is not "Ready", error: <nil>
	I1227 09:37:28.443502  640477 provision.go:177] copyRemoteCerts
	I1227 09:37:28.443559  640477 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1227 09:37:28.443593  640477 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-157923
	I1227 09:37:28.461329  640477 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33473 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/auto-157923/id_rsa Username:docker}
	I1227 09:37:28.550382  640477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1227 09:37:28.568972  640477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1227 09:37:28.585729  640477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1227 09:37:28.602155  640477 provision.go:87] duration metric: took 333.013948ms to configureAuth
	I1227 09:37:28.602182  640477 ubuntu.go:206] setting minikube options for container-runtime
	I1227 09:37:28.602341  640477 config.go:182] Loaded profile config "auto-157923": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:37:28.602449  640477 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-157923
	I1227 09:37:28.619541  640477 main.go:144] libmachine: Using SSH client type: native
	I1227 09:37:28.619769  640477 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33473 <nil> <nil>}
	I1227 09:37:28.619806  640477 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1227 09:37:28.883754  640477 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1227 09:37:28.883780  640477 machine.go:97] duration metric: took 1.087580783s to provisionDockerMachine
	I1227 09:37:28.883811  640477 client.go:176] duration metric: took 5.289706483s to LocalClient.Create
	I1227 09:37:28.883834  640477 start.go:167] duration metric: took 5.289784753s to libmachine.API.Create "auto-157923"
	I1227 09:37:28.883844  640477 start.go:293] postStartSetup for "auto-157923" (driver="docker")
	I1227 09:37:28.883856  640477 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1227 09:37:28.883915  640477 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1227 09:37:28.883952  640477 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-157923
	I1227 09:37:28.901443  640477 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33473 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/auto-157923/id_rsa Username:docker}
	I1227 09:37:28.992208  640477 ssh_runner.go:195] Run: cat /etc/os-release
	I1227 09:37:28.995626  640477 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1227 09:37:28.995659  640477 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1227 09:37:28.995674  640477 filesync.go:126] Scanning /home/jenkins/minikube-integration/22343-373581/.minikube/addons for local assets ...
	I1227 09:37:28.995736  640477 filesync.go:126] Scanning /home/jenkins/minikube-integration/22343-373581/.minikube/files for local assets ...
	I1227 09:37:28.995857  640477 filesync.go:149] local asset: /home/jenkins/minikube-integration/22343-373581/.minikube/files/etc/ssl/certs/3771712.pem -> 3771712.pem in /etc/ssl/certs
	I1227 09:37:28.995999  640477 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1227 09:37:29.003499  640477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/files/etc/ssl/certs/3771712.pem --> /etc/ssl/certs/3771712.pem (1708 bytes)
	I1227 09:37:29.022508  640477 start.go:296] duration metric: took 138.650587ms for postStartSetup
	I1227 09:37:29.022880  640477 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-157923
	I1227 09:37:29.041193  640477 profile.go:143] Saving config to /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/auto-157923/config.json ...
	I1227 09:37:29.041452  640477 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 09:37:29.041503  640477 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-157923
	I1227 09:37:29.058225  640477 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33473 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/auto-157923/id_rsa Username:docker}
	I1227 09:37:29.146558  640477 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1227 09:37:29.150983  640477 start.go:128] duration metric: took 5.558498758s to createHost
	I1227 09:37:29.151006  640477 start.go:83] releasing machines lock for "auto-157923", held for 5.55862422s
	I1227 09:37:29.151081  640477 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-157923
	I1227 09:37:29.168302  640477 ssh_runner.go:195] Run: cat /version.json
	I1227 09:37:29.168357  640477 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-157923
	I1227 09:37:29.168399  640477 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1227 09:37:29.168475  640477 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-157923
	I1227 09:37:29.185648  640477 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33473 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/auto-157923/id_rsa Username:docker}
	I1227 09:37:29.187085  640477 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33473 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/auto-157923/id_rsa Username:docker}
	I1227 09:37:29.327329  640477 ssh_runner.go:195] Run: systemctl --version
	I1227 09:37:29.333380  640477 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1227 09:37:29.367030  640477 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1227 09:37:29.371491  640477 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1227 09:37:29.371555  640477 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1227 09:37:29.396934  640477 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1227 09:37:29.396955  640477 start.go:496] detecting cgroup driver to use...
	I1227 09:37:29.396991  640477 detect.go:190] detected "systemd" cgroup driver on host os
	I1227 09:37:29.397037  640477 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1227 09:37:29.413011  640477 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 09:37:29.424413  640477 docker.go:218] disabling cri-docker service (if available) ...
	I1227 09:37:29.424469  640477 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1227 09:37:29.440514  640477 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1227 09:37:29.456555  640477 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1227 09:37:29.535212  640477 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1227 09:37:29.621192  640477 docker.go:234] disabling docker service ...
	I1227 09:37:29.621266  640477 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1227 09:37:29.639272  640477 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1227 09:37:29.651656  640477 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1227 09:37:29.729754  640477 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1227 09:37:29.811800  640477 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1227 09:37:29.823537  640477 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 09:37:29.837297  640477 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1227 09:37:29.837350  640477 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:37:29.846843  640477 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1227 09:37:29.846891  640477 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:37:29.855217  640477 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:37:29.863186  640477 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:37:29.871466  640477 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1227 09:37:29.879936  640477 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:37:29.888174  640477 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:37:29.901122  640477 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:37:29.909263  640477 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1227 09:37:29.916108  640477 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1227 09:37:29.923007  640477 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 09:37:29.999546  640477 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1227 09:37:30.136737  640477 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1227 09:37:30.136827  640477 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1227 09:37:30.140540  640477 start.go:574] Will wait 60s for crictl version
	I1227 09:37:30.140584  640477 ssh_runner.go:195] Run: which crictl
	I1227 09:37:30.144067  640477 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1227 09:37:30.169977  640477 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
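
Both "Will wait 60s" steps above are plain readiness polls against the filesystem: stat the CRI-O socket until it appears, then locate crictl. A generic sketch of that deadline poll (the function name and the 500ms interval are assumptions of this sketch):

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForPath polls for a filesystem path until it appears or the deadline
// passes, like the 60s waits for /var/run/crio/crio.sock logged above.
func waitForPath(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
}

func main() {
	fmt.Println(waitForPath("/var/run/crio/crio.sock", 60*time.Second))
}
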
	I1227 09:37:30.170059  640477 ssh_runner.go:195] Run: crio --version
	I1227 09:37:30.197410  640477 ssh_runner.go:195] Run: crio --version
	I1227 09:37:30.225050  640477 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.34.3 ...
	W1227 09:37:26.875731  631392 pod_ready.go:104] pod "coredns-7d764666f9-wfv5r" is not "Ready", error: <nil>
	W1227 09:37:29.375572  631392 pod_ready.go:104] pod "coredns-7d764666f9-wfv5r" is not "Ready", error: <nil>
	I1227 09:37:30.225982  640477 cli_runner.go:164] Run: docker network inspect auto-157923 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 09:37:30.243309  640477 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1227 09:37:30.247353  640477 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 09:37:30.257557  640477 kubeadm.go:884] updating cluster {Name:auto-157923 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:auto-157923 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1227 09:37:30.257694  640477 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 09:37:30.257737  640477 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 09:37:30.289512  640477 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 09:37:30.289537  640477 crio.go:433] Images already preloaded, skipping extraction
	I1227 09:37:30.289592  640477 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 09:37:30.314579  640477 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 09:37:30.314601  640477 cache_images.go:86] Images are preloaded, skipping loading
	I1227 09:37:30.314608  640477 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.35.0 crio true true} ...
	I1227 09:37:30.314692  640477 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=auto-157923 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:auto-157923 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1227 09:37:30.314755  640477 ssh_runner.go:195] Run: crio config
	I1227 09:37:30.360105  640477 cni.go:84] Creating CNI manager for ""
	I1227 09:37:30.360127  640477 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 09:37:30.360145  640477 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1227 09:37:30.360168  640477 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-157923 NodeName:auto-157923 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1227 09:37:30.360293  640477 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "auto-157923"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1227 09:37:30.360351  640477 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1227 09:37:30.368460  640477 binaries.go:51] Found k8s binaries, skipping transfer
	I1227 09:37:30.368522  640477 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1227 09:37:30.376914  640477 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (361 bytes)
	I1227 09:37:30.389564  640477 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1227 09:37:30.404731  640477 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2207 bytes)
	I1227 09:37:30.416866  640477 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1227 09:37:30.420364  640477 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 09:37:30.429591  640477 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 09:37:30.510148  640477 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 09:37:30.536573  640477 certs.go:69] Setting up /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/auto-157923 for IP: 192.168.94.2
	I1227 09:37:30.536596  640477 certs.go:195] generating shared ca certs ...
	I1227 09:37:30.536615  640477 certs.go:227] acquiring lock for ca certs: {Name:mkab52f7e7387ca5a3a8b54d27e85c3698ea95d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:37:30.536773  640477 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22343-373581/.minikube/ca.key
	I1227 09:37:30.536865  640477 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22343-373581/.minikube/proxy-client-ca.key
	I1227 09:37:30.536888  640477 certs.go:257] generating profile certs ...
	I1227 09:37:30.536964  640477 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/auto-157923/client.key
	I1227 09:37:30.536991  640477 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/auto-157923/client.crt with IP's: []
	I1227 09:37:30.624890  640477 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/auto-157923/client.crt ...
	I1227 09:37:30.624917  640477 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/auto-157923/client.crt: {Name:mk5fec5dc889a050ff07ef9dc8a0ee9dc572cec2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:37:30.625076  640477 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/auto-157923/client.key ...
	I1227 09:37:30.625088  640477 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/auto-157923/client.key: {Name:mk88d35ccf30cb328ea28c0bdbaa05748553964d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:37:30.625165  640477 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/auto-157923/apiserver.key.abc95ca7
	I1227 09:37:30.625180  640477 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/auto-157923/apiserver.crt.abc95ca7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1227 09:37:30.795196  640477 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/auto-157923/apiserver.crt.abc95ca7 ...
	I1227 09:37:30.795236  640477 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/auto-157923/apiserver.crt.abc95ca7: {Name:mk35abb026792aa312a3fafcde9a2d75bf696072 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:37:30.795436  640477 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/auto-157923/apiserver.key.abc95ca7 ...
	I1227 09:37:30.795455  640477 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/auto-157923/apiserver.key.abc95ca7: {Name:mkb5743db85c3505685ca0680323d5bc1c7a1b98 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:37:30.795567  640477 certs.go:382] copying /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/auto-157923/apiserver.crt.abc95ca7 -> /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/auto-157923/apiserver.crt
	I1227 09:37:30.795652  640477 certs.go:386] copying /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/auto-157923/apiserver.key.abc95ca7 -> /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/auto-157923/apiserver.key
	I1227 09:37:30.795708  640477 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/auto-157923/proxy-client.key
	I1227 09:37:30.795723  640477 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/auto-157923/proxy-client.crt with IP's: []
	I1227 09:37:30.998354  640477 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/auto-157923/proxy-client.crt ...
	I1227 09:37:30.998391  640477 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/auto-157923/proxy-client.crt: {Name:mk8e3feffc991877f9962038e1715d34d9322f92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:37:30.998591  640477 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/auto-157923/proxy-client.key ...
	I1227 09:37:30.998610  640477 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/auto-157923/proxy-client.key: {Name:mkbad445ba173273276d15e93d1c494f347429b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:37:30.998863  640477 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/377171.pem (1338 bytes)
	W1227 09:37:30.998921  640477 certs.go:480] ignoring /home/jenkins/minikube-integration/22343-373581/.minikube/certs/377171_empty.pem, impossibly tiny 0 bytes
	I1227 09:37:30.998938  640477 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca-key.pem (1679 bytes)
	I1227 09:37:30.998972  640477 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca.pem (1082 bytes)
	I1227 09:37:30.999005  640477 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/cert.pem (1123 bytes)
	I1227 09:37:30.999038  640477 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/key.pem (1679 bytes)
	I1227 09:37:30.999092  640477 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-373581/.minikube/files/etc/ssl/certs/3771712.pem (1708 bytes)
	I1227 09:37:30.999815  640477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1227 09:37:31.021838  640477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1227 09:37:31.041056  640477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1227 09:37:31.057858  640477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1227 09:37:31.075048  640477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/auto-157923/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1227 09:37:31.093905  640477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/auto-157923/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1227 09:37:31.112516  640477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/auto-157923/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1227 09:37:31.130323  640477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/auto-157923/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1227 09:37:31.149098  640477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1227 09:37:31.168329  640477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/certs/377171.pem --> /usr/share/ca-certificates/377171.pem (1338 bytes)
	I1227 09:37:31.186296  640477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/files/etc/ssl/certs/3771712.pem --> /usr/share/ca-certificates/3771712.pem (1708 bytes)
	I1227 09:37:31.203661  640477 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1227 09:37:31.216953  640477 ssh_runner.go:195] Run: openssl version
	I1227 09:37:31.223419  640477 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:37:31.231562  640477 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1227 09:37:31.241117  640477 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:37:31.245586  640477 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 09:06 /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:37:31.245653  640477 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:37:31.285920  640477 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1227 09:37:31.293932  640477 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1227 09:37:31.302905  640477 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/377171.pem
	I1227 09:37:31.310561  640477 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/377171.pem /etc/ssl/certs/377171.pem
	I1227 09:37:31.318173  640477 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/377171.pem
	I1227 09:37:31.322222  640477 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 27 09:09 /usr/share/ca-certificates/377171.pem
	I1227 09:37:31.322277  640477 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/377171.pem
	I1227 09:37:31.356462  640477 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1227 09:37:31.364014  640477 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/377171.pem /etc/ssl/certs/51391683.0
	I1227 09:37:31.371126  640477 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3771712.pem
	I1227 09:37:31.378844  640477 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3771712.pem /etc/ssl/certs/3771712.pem
	I1227 09:37:31.386398  640477 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3771712.pem
	I1227 09:37:31.390084  640477 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 27 09:09 /usr/share/ca-certificates/3771712.pem
	I1227 09:37:31.390139  640477 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3771712.pem
	I1227 09:37:31.428052  640477 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1227 09:37:31.435848  640477 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/3771712.pem /etc/ssl/certs/3ec20f2e.0
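
Each CA above is made visible to OpenSSL-based tooling by symlinking its subject-hash name (b5213941.0, 51391683.0, 3ec20f2e.0) to the PEM under /etc/ssl/certs, which is how OpenSSL locates trust anchors. A sketch of that hash-and-symlink pairing, shelling out to the openssl binary the same way the log does (installCACert is an illustrative name):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// installCACert computes the OpenSSL subject hash of a PEM file and points
// <hash>.0 at it under /etc/ssl/certs, mirroring the
// `openssl x509 -hash -noout` + `ln -fs` pair in the trace above.
func installCACert(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	os.Remove(link) // emulate ln -fs: replace any existing link
	return os.Symlink(pemPath, link)
}

func main() {
	fmt.Println(installCACert("/usr/share/ca-certificates/minikubeCA.pem"))
}
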
	I1227 09:37:31.443922  640477 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1227 09:37:31.447673  640477 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1227 09:37:31.447725  640477 kubeadm.go:401] StartCluster: {Name:auto-157923 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:auto-157923 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 09:37:31.447828  640477 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 09:37:31.447871  640477 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 09:37:31.475207  640477 cri.go:96] found id: ""
	I1227 09:37:31.475275  640477 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1227 09:37:31.483398  640477 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1227 09:37:31.491119  640477 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1227 09:37:31.491165  640477 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1227 09:37:31.498704  640477 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1227 09:37:31.498718  640477 kubeadm.go:158] found existing configuration files:
	
	I1227 09:37:31.498751  640477 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1227 09:37:31.506264  640477 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1227 09:37:31.506312  640477 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1227 09:37:31.513506  640477 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1227 09:37:31.520829  640477 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1227 09:37:31.520868  640477 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1227 09:37:31.527975  640477 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1227 09:37:31.536106  640477 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1227 09:37:31.536155  640477 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1227 09:37:31.544285  640477 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1227 09:37:31.552388  640477 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1227 09:37:31.552438  640477 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
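
The four grep-then-rm pairs above implement stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8443, and is removed otherwise so kubeadm can regenerate it. A compact Go sketch of the same logic (a local-filesystem stand-in for the SSH-run shell commands):

package main

import (
	"bytes"
	"os"
)

// cleanStaleConfig keeps a kubeconfig only if it already targets the expected
// control-plane endpoint; otherwise it is removed (rm -f semantics).
func cleanStaleConfig(path, endpoint string) error {
	data, err := os.ReadFile(path)
	if err == nil && bytes.Contains(data, []byte(endpoint)) {
		return nil // correct endpoint already present: keep the file
	}
	if err := os.Remove(path); err != nil && !os.IsNotExist(err) {
		return err
	}
	return nil
}

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	for _, f := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		_ = cleanStaleConfig(f, endpoint)
	}
}
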
	I1227 09:37:31.559610  640477 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1227 09:37:31.597300  640477 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I1227 09:37:31.597363  640477 kubeadm.go:319] [preflight] Running pre-flight checks
	I1227 09:37:31.666942  640477 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1227 09:37:31.667040  640477 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1227 09:37:31.667077  640477 kubeadm.go:319] OS: Linux
	I1227 09:37:31.667155  640477 kubeadm.go:319] CGROUPS_CPU: enabled
	I1227 09:37:31.667269  640477 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1227 09:37:31.667358  640477 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1227 09:37:31.667426  640477 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1227 09:37:31.667509  640477 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1227 09:37:31.667580  640477 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1227 09:37:31.667657  640477 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1227 09:37:31.667719  640477 kubeadm.go:319] CGROUPS_IO: enabled
	I1227 09:37:31.728917  640477 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1227 09:37:31.729067  640477 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1227 09:37:31.729228  640477 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1227 09:37:31.737418  640477 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1227 09:37:27.592142  641411 out.go:252] * Restarting existing docker container for "newest-cni-246956" ...
	I1227 09:37:27.592203  641411 cli_runner.go:164] Run: docker start newest-cni-246956
	I1227 09:37:27.886738  641411 cli_runner.go:164] Run: docker container inspect newest-cni-246956 --format={{.State.Status}}
	I1227 09:37:27.907223  641411 kic.go:430] container "newest-cni-246956" state is running.
	I1227 09:37:27.907710  641411 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-246956
	I1227 09:37:27.928177  641411 profile.go:143] Saving config to /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/newest-cni-246956/config.json ...
	I1227 09:37:27.928467  641411 machine.go:94] provisionDockerMachine start ...
	I1227 09:37:27.928564  641411 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-246956
	I1227 09:37:27.948875  641411 main.go:144] libmachine: Using SSH client type: native
	I1227 09:37:27.949165  641411 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33478 <nil> <nil>}
	I1227 09:37:27.949178  641411 main.go:144] libmachine: About to run SSH command:
	hostname
	I1227 09:37:27.949816  641411 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58996->127.0.0.1:33478: read: connection reset by peer
	I1227 09:37:31.080379  641411 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-246956
	
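
The failed handshake at 09:37:27 ("connection reset by peer") followed by success at 09:37:31 is the provisioner retrying while the restarted container's sshd comes up. A stdlib-only sketch of that wait-for-port loop (address and timeout are illustrative, taken from the forwarded port in the log):

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

// waitForPort polls a TCP address until it accepts connections or the
// overall timeout elapses.
func waitForPort(addr string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("%s not reachable within %s", addr, timeout)
}

func main() {
	if err := waitForPort("127.0.0.1:33478", 60*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
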
	I1227 09:37:31.080410  641411 ubuntu.go:182] provisioning hostname "newest-cni-246956"
	I1227 09:37:31.080477  641411 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-246956
	I1227 09:37:31.099049  641411 main.go:144] libmachine: Using SSH client type: native
	I1227 09:37:31.099379  641411 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33478 <nil> <nil>}
	I1227 09:37:31.099406  641411 main.go:144] libmachine: About to run SSH command:
	sudo hostname newest-cni-246956 && echo "newest-cni-246956" | sudo tee /etc/hostname
	I1227 09:37:31.234303  641411 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-246956
	
	I1227 09:37:31.234383  641411 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-246956
	I1227 09:37:31.256173  641411 main.go:144] libmachine: Using SSH client type: native
	I1227 09:37:31.256398  641411 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33478 <nil> <nil>}
	I1227 09:37:31.256429  641411 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-246956' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-246956/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-246956' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1227 09:37:31.384140  641411 main.go:144] libmachine: SSH cmd err, output: <nil>: 
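
The shell script above makes the /etc/hosts update idempotent: it does nothing if a line already maps the hostname, rewrites an existing 127.0.1.1 entry if one exists, and appends otherwise. A rough Go equivalent of that logic (run locally rather than over SSH; paths and permissions illustrative):

package main

import (
	"os"
	"strings"
)

// ensureHostsEntry adds or rewrites a 127.0.1.1 mapping for hostname,
// leaving the file untouched if the mapping is already present.
func ensureHostsEntry(path, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	lines := strings.Split(string(data), "\n")
	for _, l := range lines {
		if strings.HasSuffix(strings.TrimSpace(l), " "+hostname) {
			return nil // already present
		}
	}
	replaced := false
	for i, l := range lines {
		if strings.HasPrefix(l, "127.0.1.1") {
			lines[i] = "127.0.1.1 " + hostname
			replaced = true
			break
		}
	}
	if !replaced {
		lines = append(lines, "127.0.1.1 "+hostname)
	}
	return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0644)
}

func main() {
	_ = ensureHostsEntry("/etc/hosts", "newest-cni-246956")
}
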
	I1227 09:37:31.384172  641411 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22343-373581/.minikube CaCertPath:/home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22343-373581/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22343-373581/.minikube}
	I1227 09:37:31.384227  641411 ubuntu.go:190] setting up certificates
	I1227 09:37:31.384241  641411 provision.go:84] configureAuth start
	I1227 09:37:31.384296  641411 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-246956
	I1227 09:37:31.404607  641411 provision.go:143] copyHostCerts
	I1227 09:37:31.404662  641411 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-373581/.minikube/ca.pem, removing ...
	I1227 09:37:31.404678  641411 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-373581/.minikube/ca.pem
	I1227 09:37:31.404741  641411 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22343-373581/.minikube/ca.pem (1082 bytes)
	I1227 09:37:31.404876  641411 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-373581/.minikube/cert.pem, removing ...
	I1227 09:37:31.404888  641411 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-373581/.minikube/cert.pem
	I1227 09:37:31.404921  641411 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22343-373581/.minikube/cert.pem (1123 bytes)
	I1227 09:37:31.404982  641411 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-373581/.minikube/key.pem, removing ...
	I1227 09:37:31.404990  641411 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-373581/.minikube/key.pem
	I1227 09:37:31.405014  641411 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22343-373581/.minikube/key.pem (1679 bytes)
	I1227 09:37:31.405069  641411 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22343-373581/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca-key.pem org=jenkins.newest-cni-246956 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-246956]
	I1227 09:37:31.529908  641411 provision.go:177] copyRemoteCerts
	I1227 09:37:31.529965  641411 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1227 09:37:31.530016  641411 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-246956
	I1227 09:37:31.548471  641411 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33478 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/newest-cni-246956/id_rsa Username:docker}
	I1227 09:37:31.643235  641411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1227 09:37:31.662940  641411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1227 09:37:31.681581  641411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1227 09:37:31.699489  641411 provision.go:87] duration metric: took 315.224154ms to configureAuth
	I1227 09:37:31.699512  641411 ubuntu.go:206] setting minikube options for container-runtime
	I1227 09:37:31.699705  641411 config.go:182] Loaded profile config "newest-cni-246956": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:37:31.699846  641411 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-246956
	I1227 09:37:31.720243  641411 main.go:144] libmachine: Using SSH client type: native
	I1227 09:37:31.720532  641411 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33478 <nil> <nil>}
	I1227 09:37:31.720551  641411 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1227 09:37:32.003127  641411 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1227 09:37:32.003152  641411 machine.go:97] duration metric: took 4.074663584s to provisionDockerMachine
	I1227 09:37:32.003168  641411 start.go:293] postStartSetup for "newest-cni-246956" (driver="docker")
	I1227 09:37:32.003182  641411 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1227 09:37:32.003242  641411 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1227 09:37:32.003288  641411 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-246956
	I1227 09:37:32.023832  641411 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33478 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/newest-cni-246956/id_rsa Username:docker}
	I1227 09:37:32.116325  641411 ssh_runner.go:195] Run: cat /etc/os-release
	I1227 09:37:32.119772  641411 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1227 09:37:32.119814  641411 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1227 09:37:32.119828  641411 filesync.go:126] Scanning /home/jenkins/minikube-integration/22343-373581/.minikube/addons for local assets ...
	I1227 09:37:32.119880  641411 filesync.go:126] Scanning /home/jenkins/minikube-integration/22343-373581/.minikube/files for local assets ...
	I1227 09:37:32.119961  641411 filesync.go:149] local asset: /home/jenkins/minikube-integration/22343-373581/.minikube/files/etc/ssl/certs/3771712.pem -> 3771712.pem in /etc/ssl/certs
	I1227 09:37:32.120060  641411 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1227 09:37:32.127840  641411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/files/etc/ssl/certs/3771712.pem --> /etc/ssl/certs/3771712.pem (1708 bytes)
	I1227 09:37:32.145199  641411 start.go:296] duration metric: took 142.013746ms for postStartSetup
	I1227 09:37:32.145294  641411 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 09:37:32.145339  641411 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-246956
	I1227 09:37:32.164659  641411 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33478 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/newest-cni-246956/id_rsa Username:docker}
	I1227 09:37:32.252473  641411 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
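
The two df/awk probes above sample the usage percentage and free gigabytes of /var after provisioning. The same figures can be read without spawning a shell via statfs; a Linux-only sketch (the percentage is an approximation of df's Use% computation):

package main

import (
	"fmt"
	"syscall"
)

func main() {
	var st syscall.Statfs_t
	if err := syscall.Statfs("/var", &st); err != nil {
		panic(err)
	}
	total := st.Blocks * uint64(st.Bsize)
	avail := st.Bavail * uint64(st.Bsize)
	fmt.Printf("/var: ~%.1f%% used, %dG available\n",
		100*float64(total-avail)/float64(total), avail>>30)
}
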
	I1227 09:37:32.257294  641411 fix.go:56] duration metric: took 4.68926s for fixHost
	I1227 09:37:32.257322  641411 start.go:83] releasing machines lock for "newest-cni-246956", held for 4.689308555s
	I1227 09:37:32.257385  641411 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-246956
	I1227 09:37:32.275910  641411 ssh_runner.go:195] Run: cat /version.json
	I1227 09:37:32.275959  641411 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-246956
	I1227 09:37:32.276004  641411 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1227 09:37:32.276088  641411 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-246956
	I1227 09:37:32.295004  641411 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33478 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/newest-cni-246956/id_rsa Username:docker}
	I1227 09:37:32.295362  641411 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33478 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/newest-cni-246956/id_rsa Username:docker}
	I1227 09:37:32.442235  641411 ssh_runner.go:195] Run: systemctl --version
	I1227 09:37:32.448779  641411 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1227 09:37:32.483613  641411 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1227 09:37:32.488246  641411 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1227 09:37:32.488302  641411 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1227 09:37:32.496138  641411 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1227 09:37:32.496164  641411 start.go:496] detecting cgroup driver to use...
	I1227 09:37:32.496193  641411 detect.go:190] detected "systemd" cgroup driver on host os
	I1227 09:37:32.496227  641411 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1227 09:37:32.510039  641411 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 09:37:32.521549  641411 docker.go:218] disabling cri-docker service (if available) ...
	I1227 09:37:32.521624  641411 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1227 09:37:32.536748  641411 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1227 09:37:32.549010  641411 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1227 09:37:32.628232  641411 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1227 09:37:32.707229  641411 docker.go:234] disabling docker service ...
	I1227 09:37:32.707308  641411 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1227 09:37:32.723133  641411 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1227 09:37:32.735630  641411 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1227 09:37:32.826300  641411 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1227 09:37:32.909512  641411 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1227 09:37:32.921687  641411 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 09:37:32.935959  641411 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1227 09:37:32.936011  641411 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:37:32.945093  641411 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1227 09:37:32.945146  641411 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:37:32.954219  641411 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:37:32.962484  641411 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:37:32.970616  641411 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1227 09:37:32.978185  641411 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:37:32.986920  641411 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:37:32.994668  641411 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:37:33.002741  641411 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1227 09:37:33.009746  641411 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1227 09:37:33.016488  641411 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 09:37:33.094917  641411 ssh_runner.go:195] Run: sudo systemctl restart crio
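
The run of sed commands above rewrites the CRI-O drop-in (pause image, systemd cgroup manager, conmon_cgroup, and the unprivileged-port sysctl) before reloading systemd and restarting the service. A simplified Go sketch of the rewrite step; the naive substring match here stands in for the anchored sed regexes actually used:

package main

import (
	"fmt"
	"os"
	"strings"
)

// setTOMLKey rewrites every line assigning the given key in a TOML-style
// config, quoting the new value.
func setTOMLKey(path, key, value string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	lines := strings.Split(string(data), "\n")
	for i, l := range lines {
		if strings.Contains(l, key+" = ") {
			lines[i] = fmt.Sprintf("%s = %q", key, value)
		}
	}
	return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0644)
}

func main() {
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	_ = setTOMLKey(conf, "pause_image", "registry.k8s.io/pause:3.10.1")
	_ = setTOMLKey(conf, "cgroup_manager", "systemd")
	// The log then runs: systemctl daemon-reload && systemctl restart crio.
}
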
	I1227 09:37:33.241717  641411 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1227 09:37:33.241836  641411 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1227 09:37:33.245878  641411 start.go:574] Will wait 60s for crictl version
	I1227 09:37:33.245919  641411 ssh_runner.go:195] Run: which crictl
	I1227 09:37:33.249536  641411 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1227 09:37:33.272986  641411 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1227 09:37:33.273053  641411 ssh_runner.go:195] Run: crio --version
	I1227 09:37:33.301152  641411 ssh_runner.go:195] Run: crio --version
	I1227 09:37:33.330066  641411 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.34.3 ...
	I1227 09:37:33.331033  641411 cli_runner.go:164] Run: docker network inspect newest-cni-246956 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 09:37:33.348463  641411 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1227 09:37:33.352489  641411 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 09:37:33.363652  641411 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1227 09:37:31.739982  640477 out.go:252]   - Generating certificates and keys ...
	I1227 09:37:31.740085  640477 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1227 09:37:31.740186  640477 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1227 09:37:31.813670  640477 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1227 09:37:31.844564  640477 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1227 09:37:31.919681  640477 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1227 09:37:32.158345  640477 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1227 09:37:32.193433  640477 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1227 09:37:32.193573  640477 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [auto-157923 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1227 09:37:32.256805  640477 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1227 09:37:32.256960  640477 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [auto-157923 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1227 09:37:32.314748  640477 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1227 09:37:32.437985  640477 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1227 09:37:32.603035  640477 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1227 09:37:32.603107  640477 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1227 09:37:32.828246  640477 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1227 09:37:32.945526  640477 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1227 09:37:33.166491  640477 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1227 09:37:33.306335  640477 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1227 09:37:33.465489  640477 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1227 09:37:33.466384  640477 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1227 09:37:33.473319  640477 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1227 09:37:33.364464  641411 kubeadm.go:884] updating cluster {Name:newest-cni-246956 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-246956 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1227 09:37:33.364588  641411 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 09:37:33.364631  641411 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 09:37:33.399571  641411 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 09:37:33.399600  641411 crio.go:433] Images already preloaded, skipping extraction
	I1227 09:37:33.399654  641411 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 09:37:33.424967  641411 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 09:37:33.424990  641411 cache_images.go:86] Images are preloaded, skipping loading
	I1227 09:37:33.425007  641411 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0 crio true true} ...
	I1227 09:37:33.425107  641411 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-246956 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:newest-cni-246956 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
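
The kubelet systemd drop-in above is rendered from per-node settings (binary path, hostname override, node IP). A sketch of rendering such a unit with text/template; the template and field names are assumptions for illustration and abbreviate the full flag set shown in the log, they are not minikube's actual template:

package main

import (
	"os"
	"text/template"
)

// unitTemplate abbreviates the drop-in above to the fields that vary per node.
const unitTemplate = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart={{.KubeletPath}} --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(unitTemplate))
	_ = t.Execute(os.Stdout, map[string]string{
		"KubeletPath": "/var/lib/minikube/binaries/v1.35.0/kubelet",
		"NodeName":    "newest-cni-246956",
		"NodeIP":      "192.168.76.2",
	})
}
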
	I1227 09:37:33.425190  641411 ssh_runner.go:195] Run: crio config
	I1227 09:37:33.481692  641411 cni.go:84] Creating CNI manager for ""
	I1227 09:37:33.481718  641411 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 09:37:33.481739  641411 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1227 09:37:33.481784  641411 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-246956 NodeName:newest-cni-246956 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1227 09:37:33.481963  641411 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-246956"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1227 09:37:33.482037  641411 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1227 09:37:33.491243  641411 binaries.go:51] Found k8s binaries, skipping transfer
	I1227 09:37:33.491305  641411 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1227 09:37:33.499311  641411 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1227 09:37:33.512666  641411 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1227 09:37:33.524552  641411 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2211 bytes)
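
The rendered config lands in kubeadm.yaml.new; later the runner diffs it against the live kubeadm.yaml (the "sudo diff -u" call at 09:37:34 below) to decide whether the running cluster needs reconfiguring. A hedged sketch of that compare-then-promote step, with cp semantics approximated by read-and-write:

package main

import (
	"bytes"
	"os"
)

// needsReconfig reports whether the freshly rendered kubeadm.yaml.new differs
// from the config the running cluster was started with.
func needsReconfig(current, next string) bool {
	a, errA := os.ReadFile(current)
	b, errB := os.ReadFile(next)
	if errA != nil || errB != nil {
		return true // either file missing: treat as changed
	}
	return !bytes.Equal(a, b)
}

func main() {
	const (
		current = "/var/tmp/minikube/kubeadm.yaml"
		next    = "/var/tmp/minikube/kubeadm.yaml.new"
	)
	if needsReconfig(current, next) {
		// promote the new config before re-running kubeadm
		if data, err := os.ReadFile(next); err == nil {
			_ = os.WriteFile(current, data, 0644)
		}
	}
}
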
	I1227 09:37:33.543595  641411 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1227 09:37:33.547713  641411 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 09:37:33.558045  641411 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 09:37:33.655308  641411 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 09:37:33.678337  641411 certs.go:69] Setting up /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/newest-cni-246956 for IP: 192.168.76.2
	I1227 09:37:33.678365  641411 certs.go:195] generating shared ca certs ...
	I1227 09:37:33.678390  641411 certs.go:227] acquiring lock for ca certs: {Name:mkab52f7e7387ca5a3a8b54d27e85c3698ea95d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:37:33.678571  641411 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22343-373581/.minikube/ca.key
	I1227 09:37:33.678626  641411 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22343-373581/.minikube/proxy-client-ca.key
	I1227 09:37:33.678639  641411 certs.go:257] generating profile certs ...
	I1227 09:37:33.678770  641411 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/newest-cni-246956/client.key
	I1227 09:37:33.678871  641411 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/newest-cni-246956/apiserver.key.c99eabfc
	I1227 09:37:33.678929  641411 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/newest-cni-246956/proxy-client.key
	I1227 09:37:33.679062  641411 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/377171.pem (1338 bytes)
	W1227 09:37:33.679103  641411 certs.go:480] ignoring /home/jenkins/minikube-integration/22343-373581/.minikube/certs/377171_empty.pem, impossibly tiny 0 bytes
	I1227 09:37:33.679115  641411 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca-key.pem (1679 bytes)
	I1227 09:37:33.679152  641411 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca.pem (1082 bytes)
	I1227 09:37:33.679186  641411 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/cert.pem (1123 bytes)
	I1227 09:37:33.679217  641411 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/key.pem (1679 bytes)
	I1227 09:37:33.679272  641411 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-373581/.minikube/files/etc/ssl/certs/3771712.pem (1708 bytes)
	I1227 09:37:33.680163  641411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1227 09:37:33.700256  641411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1227 09:37:33.719575  641411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1227 09:37:33.739211  641411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1227 09:37:33.765510  641411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/newest-cni-246956/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1227 09:37:33.783923  641411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/newest-cni-246956/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1227 09:37:33.800754  641411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/newest-cni-246956/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1227 09:37:33.817425  641411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/newest-cni-246956/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1227 09:37:33.835912  641411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/files/etc/ssl/certs/3771712.pem --> /usr/share/ca-certificates/3771712.pem (1708 bytes)
	I1227 09:37:33.857612  641411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1227 09:37:33.878374  641411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/certs/377171.pem --> /usr/share/ca-certificates/377171.pem (1338 bytes)
	I1227 09:37:33.894901  641411 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1227 09:37:33.906496  641411 ssh_runner.go:195] Run: openssl version
	I1227 09:37:33.912309  641411 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:37:33.919094  641411 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1227 09:37:33.925901  641411 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:37:33.929289  641411 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 09:06 /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:37:33.929330  641411 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:37:33.963963  641411 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1227 09:37:33.972499  641411 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/377171.pem
	I1227 09:37:33.980436  641411 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/377171.pem /etc/ssl/certs/377171.pem
	I1227 09:37:33.987756  641411 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/377171.pem
	I1227 09:37:33.991337  641411 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 27 09:09 /usr/share/ca-certificates/377171.pem
	I1227 09:37:33.991405  641411 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/377171.pem
	I1227 09:37:34.025577  641411 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1227 09:37:34.033620  641411 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3771712.pem
	I1227 09:37:34.041719  641411 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3771712.pem /etc/ssl/certs/3771712.pem
	I1227 09:37:34.049022  641411 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3771712.pem
	I1227 09:37:34.052508  641411 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 27 09:09 /usr/share/ca-certificates/3771712.pem
	I1227 09:37:34.052553  641411 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3771712.pem
	I1227 09:37:34.087835  641411 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1227 09:37:34.095026  641411 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1227 09:37:34.098686  641411 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1227 09:37:34.135392  641411 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1227 09:37:34.171147  641411 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1227 09:37:34.221456  641411 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1227 09:37:34.266559  641411 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1227 09:37:34.328054  641411 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
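
Each "openssl x509 -checkend 86400" call above exits nonzero if the certificate expires within 24 hours, which is what would trigger regeneration. The equivalent check in Go with crypto/x509 (path illustrative):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM-encoded certificate at pemPath
// expires within the given window, matching -checkend semantics.
func expiresWithin(pemPath string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(pemPath)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", pemPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}
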
	I1227 09:37:34.389758  641411 kubeadm.go:401] StartCluster: {Name:newest-cni-246956 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-246956 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 09:37:34.390081  641411 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 09:37:34.390187  641411 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 09:37:34.437285  641411 cri.go:96] found id: "2e65f48ec568aa4a15024b2eea0baac4a75982b1ef72891857cb7ece1b229a93"
	I1227 09:37:34.437317  641411 cri.go:96] found id: "f2b4d1b51b8b5d05a7022c93ea1d77fcf987df7336ffa16aa507218f305194b7"
	I1227 09:37:34.437323  641411 cri.go:96] found id: "5263ceb230522a69b34eb816b6de3b89096d950d9862e603732cbf4a1c75836e"
	I1227 09:37:34.437328  641411 cri.go:96] found id: "270f3f6fdb6aa9105489c1aa87a857c489190a97d3b5dadc4b9357252b50af3a"
	I1227 09:37:34.437332  641411 cri.go:96] found id: ""
	I1227 09:37:34.437382  641411 ssh_runner.go:195] Run: sudo runc list -f json
	W1227 09:37:34.456922  641411 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T09:37:34Z" level=error msg="open /run/runc: no such file or directory"
	I1227 09:37:34.457009  641411 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1227 09:37:34.473303  641411 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1227 09:37:34.473335  641411 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1227 09:37:34.473385  641411 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1227 09:37:34.483013  641411 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1227 09:37:34.483924  641411 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-246956" does not appear in /home/jenkins/minikube-integration/22343-373581/kubeconfig
	I1227 09:37:34.484378  641411 kubeconfig.go:62] /home/jenkins/minikube-integration/22343-373581/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-246956" cluster setting kubeconfig missing "newest-cni-246956" context setting]
	I1227 09:37:34.485153  641411 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-373581/kubeconfig: {Name:mkb411689a0952d3f8325ae3ededdb7352d2980c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:37:34.487339  641411 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1227 09:37:34.497863  641411 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1227 09:37:34.497895  641411 kubeadm.go:602] duration metric: took 24.553545ms to restartPrimaryControlPlane
	I1227 09:37:34.497906  641411 kubeadm.go:403] duration metric: took 108.159942ms to StartCluster
	I1227 09:37:34.497923  641411 settings.go:142] acquiring lock: {Name:mkc867fd794159ff1847b43b60548e454c403aa9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:37:34.497980  641411 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22343-373581/kubeconfig
	I1227 09:37:34.499333  641411 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-373581/kubeconfig: {Name:mkb411689a0952d3f8325ae3ededdb7352d2980c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:37:34.499604  641411 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1227 09:37:34.499714  641411 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1227 09:37:34.499830  641411 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-246956"
	I1227 09:37:34.499847  641411 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-246956"
	W1227 09:37:34.499855  641411 addons.go:248] addon storage-provisioner should already be in state true
	I1227 09:37:34.499872  641411 config.go:182] Loaded profile config "newest-cni-246956": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:37:34.499884  641411 host.go:66] Checking if "newest-cni-246956" exists ...
	I1227 09:37:34.499927  641411 addons.go:70] Setting dashboard=true in profile "newest-cni-246956"
	I1227 09:37:34.499940  641411 addons.go:239] Setting addon dashboard=true in "newest-cni-246956"
	W1227 09:37:34.499949  641411 addons.go:248] addon dashboard should already be in state true
	I1227 09:37:34.499975  641411 host.go:66] Checking if "newest-cni-246956" exists ...
	I1227 09:37:34.500407  641411 cli_runner.go:164] Run: docker container inspect newest-cni-246956 --format={{.State.Status}}
	I1227 09:37:34.500508  641411 cli_runner.go:164] Run: docker container inspect newest-cni-246956 --format={{.State.Status}}
	I1227 09:37:34.500508  641411 addons.go:70] Setting default-storageclass=true in profile "newest-cni-246956"
	I1227 09:37:34.500530  641411 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-246956"
	I1227 09:37:34.500863  641411 cli_runner.go:164] Run: docker container inspect newest-cni-246956 --format={{.State.Status}}
	I1227 09:37:34.501608  641411 out.go:179] * Verifying Kubernetes components...
	I1227 09:37:34.502594  641411 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 09:37:34.533328  641411 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1227 09:37:34.536455  641411 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1227 09:37:34.536570  641411 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1227 09:37:34.536581  641411 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1227 09:37:34.536641  641411 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-246956
	I1227 09:37:34.541757  641411 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	W1227 09:37:30.537950  629532 pod_ready.go:104] pod "coredns-7d764666f9-wnzhx" is not "Ready", error: <nil>
	W1227 09:37:32.538158  629532 pod_ready.go:104] pod "coredns-7d764666f9-wnzhx" is not "Ready", error: <nil>
	W1227 09:37:34.547357  629532 pod_ready.go:104] pod "coredns-7d764666f9-wnzhx" is not "Ready", error: <nil>
	W1227 09:37:31.875863  631392 pod_ready.go:104] pod "coredns-7d764666f9-wfv5r" is not "Ready", error: <nil>
	W1227 09:37:34.377714  631392 pod_ready.go:104] pod "coredns-7d764666f9-wfv5r" is not "Ready", error: <nil>
	I1227 09:37:34.542155  641411 addons.go:239] Setting addon default-storageclass=true in "newest-cni-246956"
	W1227 09:37:34.542177  641411 addons.go:248] addon default-storageclass should already be in state true
	I1227 09:37:34.542206  641411 host.go:66] Checking if "newest-cni-246956" exists ...
	I1227 09:37:34.542658  641411 cli_runner.go:164] Run: docker container inspect newest-cni-246956 --format={{.State.Status}}
	I1227 09:37:34.543046  641411 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1227 09:37:34.543064  641411 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1227 09:37:34.543149  641411 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-246956
	I1227 09:37:34.569737  641411 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33478 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/newest-cni-246956/id_rsa Username:docker}
	I1227 09:37:34.579505  641411 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1227 09:37:34.579526  641411 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1227 09:37:34.579588  641411 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-246956
	I1227 09:37:34.584102  641411 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33478 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/newest-cni-246956/id_rsa Username:docker}
	I1227 09:37:34.608811  641411 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33478 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/newest-cni-246956/id_rsa Username:docker}
	I1227 09:37:34.685560  641411 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 09:37:34.696216  641411 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1227 09:37:34.704691  641411 api_server.go:52] waiting for apiserver process to appear ...
	I1227 09:37:34.704856  641411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 09:37:34.713073  641411 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1227 09:37:34.713098  641411 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1227 09:37:34.728707  641411 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1227 09:37:34.731147  641411 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1227 09:37:34.731173  641411 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1227 09:37:34.765845  641411 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1227 09:37:34.765878  641411 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1227 09:37:34.789264  641411 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1227 09:37:34.789291  641411 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1227 09:37:34.814432  641411 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1227 09:37:34.814462  641411 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1227 09:37:34.831380  641411 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1227 09:37:34.831417  641411 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1227 09:37:34.848477  641411 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1227 09:37:34.848504  641411 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1227 09:37:34.863996  641411 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1227 09:37:34.864027  641411 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1227 09:37:34.883520  641411 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1227 09:37:34.883549  641411 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1227 09:37:34.900847  641411 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1227 09:37:36.406511  641411 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.710183067s)
	I1227 09:37:36.406554  641411 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.701672543s)
	I1227 09:37:36.406594  641411 api_server.go:72] duration metric: took 1.906951366s to wait for apiserver process to appear ...
	I1227 09:37:36.406603  641411 api_server.go:88] waiting for apiserver healthz status ...
	I1227 09:37:36.406617  641411 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.677865022s)
	I1227 09:37:36.406628  641411 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1227 09:37:36.406758  641411 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.505868765s)
	I1227 09:37:36.411098  641411 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-246956 addons enable metrics-server
	
	I1227 09:37:36.412244  641411 api_server.go:325] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1227 09:37:36.412262  641411 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1227 09:37:36.420084  641411 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1227 09:37:36.421032  641411 addons.go:530] duration metric: took 1.921329634s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
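
All three addons report enabled here even though the apiserver healthz below is still returning 500; a quick spot-check that the dashboard objects actually landed (illustrative, not part of the test run) is to list its namespace:

    kubectl --context newest-cni-246956 -n kubernetes-dashboard get deploy,svc,pods
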
	I1227 09:37:36.907243  641411 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1227 09:37:36.912278  641411 api_server.go:325] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1227 09:37:36.912313  641411 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
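
Both 500 bodies above are the same verbose healthz probe, logged once at info and once as a warning; the check can be reproduced by hand against the endpoint shown, where -k skips verification of minikube's self-signed certificate:

    curl -k 'https://192.168.76.2:8443/healthz?verbose'
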
	I1227 09:37:37.407602  641411 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1227 09:37:37.411578  641411 api_server.go:325] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1227 09:37:37.412840  641411 api_server.go:141] control plane version: v1.35.0
	I1227 09:37:37.412870  641411 api_server.go:131] duration metric: took 1.006255426s to wait for apiserver health ...
	I1227 09:37:37.412882  641411 system_pods.go:43] waiting for kube-system pods to appear ...
	I1227 09:37:37.416623  641411 system_pods.go:59] 8 kube-system pods found
	I1227 09:37:37.416669  641411 system_pods.go:61] "coredns-7d764666f9-kqzph" [cd4faccb-5994-46cb-a83b-d554df2fb8f2] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1227 09:37:37.416685  641411 system_pods.go:61] "etcd-newest-cni-246956" [26721526-906a-4949-a50f-92ea210b80be] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1227 09:37:37.416700  641411 system_pods.go:61] "kindnet-lmtxw" [e2185b04-5cba-4c54-86e0-9c2515f95074] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1227 09:37:37.416710  641411 system_pods.go:61] "kube-apiserver-newest-cni-246956" [7e3043fd-edc4-4182-8659-eba54f67a2d9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1227 09:37:37.416732  641411 system_pods.go:61] "kube-controller-manager-newest-cni-246956" [7a30adc9-ce06-4908-8f0b-ed3da78f6394] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1227 09:37:37.416745  641411 system_pods.go:61] "kube-proxy-65ltj" [a1e5773a-e15f-405b-bca5-62a52d6e83a2] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1227 09:37:37.416753  641411 system_pods.go:61] "kube-scheduler-newest-cni-246956" [e515cbde-415b-4a69-b0be-a4c87c86858e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1227 09:37:37.416760  641411 system_pods.go:61] "storage-provisioner" [0735bc86-6017-4c08-8562-4a36fe686929] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1227 09:37:37.416771  641411 system_pods.go:74] duration metric: took 3.882996ms to wait for pod list to return data ...
	I1227 09:37:37.416784  641411 default_sa.go:34] waiting for default service account to be created ...
	I1227 09:37:37.419297  641411 default_sa.go:45] found service account: "default"
	I1227 09:37:37.419317  641411 default_sa.go:55] duration metric: took 2.526247ms for default service account to be created ...
	I1227 09:37:37.419328  641411 kubeadm.go:587] duration metric: took 2.919690651s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1227 09:37:37.419352  641411 node_conditions.go:102] verifying NodePressure condition ...
	I1227 09:37:37.421724  641411 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1227 09:37:37.421754  641411 node_conditions.go:123] node cpu capacity is 8
	I1227 09:37:37.421770  641411 node_conditions.go:105] duration metric: took 2.41226ms to run NodePressure ...
	I1227 09:37:37.421785  641411 start.go:242] waiting for startup goroutines ...
	I1227 09:37:37.421826  641411 start.go:247] waiting for cluster config update ...
	I1227 09:37:37.421844  641411 start.go:256] writing updated cluster config ...
	I1227 09:37:37.422146  641411 ssh_runner.go:195] Run: rm -f paused
	I1227 09:37:37.473689  641411 start.go:625] kubectl: 1.35.0, cluster: 1.35.0 (minor skew: 0)
	I1227 09:37:37.476027  641411 out.go:179] * Done! kubectl is now configured to use "newest-cni-246956" cluster and "default" namespace by default
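
With the profile reported Done, the cluster state can be spot-checked through the kubectl context minikube just configured (illustrative):

    kubectl --context newest-cni-246956 get nodes -o wide
    kubectl --context newest-cni-246956 get pods -A
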
	I1227 09:37:33.474697  640477 out.go:252]   - Booting up control plane ...
	I1227 09:37:33.474850  640477 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1227 09:37:33.474954  640477 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1227 09:37:33.476497  640477 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1227 09:37:33.495717  640477 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1227 09:37:33.495865  640477 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1227 09:37:33.502491  640477 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1227 09:37:33.502850  640477 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1227 09:37:33.502916  640477 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1227 09:37:33.613095  640477 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1227 09:37:33.613272  640477 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1227 09:37:34.114153  640477 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.316468ms
	I1227 09:37:34.118694  640477 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1227 09:37:34.118842  640477 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8443/livez
	I1227 09:37:34.118945  640477 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1227 09:37:34.119024  640477 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1227 09:37:35.123740  640477 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.004896698s
	I1227 09:37:35.928654  640477 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.809807783s
	I1227 09:37:37.620162  640477 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.501336374s
	I1227 09:37:37.636307  640477 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1227 09:37:37.648028  640477 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1227 09:37:37.655852  640477 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1227 09:37:37.656073  640477 kubeadm.go:319] [mark-control-plane] Marking the node auto-157923 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1227 09:37:37.664167  640477 kubeadm.go:319] [bootstrap-token] Using token: h84766.ayttqn53s0bkjj4i
	I1227 09:37:37.665421  640477 out.go:252]   - Configuring RBAC rules ...
	I1227 09:37:37.665584  640477 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1227 09:37:37.668276  640477 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1227 09:37:37.673023  640477 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1227 09:37:37.675538  640477 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1227 09:37:37.678012  640477 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1227 09:37:37.681014  640477 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1227 09:37:38.026331  640477 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1227 09:37:38.440680  640477 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1227 09:37:39.027874  640477 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1227 09:37:39.029989  640477 kubeadm.go:319] 
	I1227 09:37:39.030077  640477 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1227 09:37:39.030091  640477 kubeadm.go:319] 
	I1227 09:37:39.030185  640477 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1227 09:37:39.030196  640477 kubeadm.go:319] 
	I1227 09:37:39.030226  640477 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1227 09:37:39.030293  640477 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1227 09:37:39.030352  640477 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1227 09:37:39.030358  640477 kubeadm.go:319] 
	I1227 09:37:39.030435  640477 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1227 09:37:39.030441  640477 kubeadm.go:319] 
	I1227 09:37:39.030498  640477 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1227 09:37:39.030504  640477 kubeadm.go:319] 
	I1227 09:37:39.030560  640477 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1227 09:37:39.030651  640477 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1227 09:37:39.030735  640477 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1227 09:37:39.030742  640477 kubeadm.go:319] 
	I1227 09:37:39.030870  640477 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1227 09:37:39.030967  640477 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1227 09:37:39.030972  640477 kubeadm.go:319] 
	I1227 09:37:39.031068  640477 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token h84766.ayttqn53s0bkjj4i \
	I1227 09:37:39.031198  640477 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:fdd38f26fbaca57ed47cc723a4a49372698d5a31b243372bf53d5b274efa96f5 \
	I1227 09:37:39.031224  640477 kubeadm.go:319] 	--control-plane 
	I1227 09:37:39.031230  640477 kubeadm.go:319] 
	I1227 09:37:39.032078  640477 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1227 09:37:39.032096  640477 kubeadm.go:319] 
	I1227 09:37:39.032199  640477 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token h84766.ayttqn53s0bkjj4i \
	I1227 09:37:39.032327  640477 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:fdd38f26fbaca57ed47cc723a4a49372698d5a31b243372bf53d5b274efa96f5 
	I1227 09:37:39.037635  640477 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1227 09:37:39.037837  640477 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
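
The SystemVerification warning is non-fatal here (the init completes above), and the kubelet warning is directly actionable with the command kubeadm itself suggests:

    sudo systemctl enable kubelet.service
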
	I1227 09:37:39.037892  640477 cni.go:84] Creating CNI manager for ""
	I1227 09:37:39.037903  640477 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 09:37:39.039169  640477 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1227 09:37:37.038856  629532 pod_ready.go:104] pod "coredns-7d764666f9-wnzhx" is not "Ready", error: <nil>
	I1227 09:37:39.040488  629532 pod_ready.go:94] pod "coredns-7d764666f9-wnzhx" is "Ready"
	I1227 09:37:39.040525  629532 pod_ready.go:86] duration metric: took 38.008133875s for pod "coredns-7d764666f9-wnzhx" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:37:39.043728  629532 pod_ready.go:83] waiting for pod "etcd-no-preload-963457" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:37:39.048854  629532 pod_ready.go:94] pod "etcd-no-preload-963457" is "Ready"
	I1227 09:37:39.048879  629532 pod_ready.go:86] duration metric: took 5.124573ms for pod "etcd-no-preload-963457" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:37:39.050919  629532 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-963457" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:37:39.054702  629532 pod_ready.go:94] pod "kube-apiserver-no-preload-963457" is "Ready"
	I1227 09:37:39.054722  629532 pod_ready.go:86] duration metric: took 3.783045ms for pod "kube-apiserver-no-preload-963457" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:37:39.056399  629532 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-963457" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:37:39.237401  629532 pod_ready.go:94] pod "kube-controller-manager-no-preload-963457" is "Ready"
	I1227 09:37:39.237437  629532 pod_ready.go:86] duration metric: took 181.018956ms for pod "kube-controller-manager-no-preload-963457" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:37:39.437496  629532 pod_ready.go:83] waiting for pod "kube-proxy-grkqs" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:37:39.838657  629532 pod_ready.go:94] pod "kube-proxy-grkqs" is "Ready"
	I1227 09:37:39.838692  629532 pod_ready.go:86] duration metric: took 401.169709ms for pod "kube-proxy-grkqs" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:37:40.039092  629532 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-963457" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:37:40.436525  629532 pod_ready.go:94] pod "kube-scheduler-no-preload-963457" is "Ready"
	I1227 09:37:40.436550  629532 pod_ready.go:86] duration metric: took 397.425819ms for pod "kube-scheduler-no-preload-963457" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:37:40.436560  629532 pod_ready.go:40] duration metric: took 39.410112609s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1227 09:37:40.484672  629532 start.go:625] kubectl: 1.35.0, cluster: 1.35.0 (minor skew: 0)
	I1227 09:37:40.485924  629532 out.go:179] * Done! kubectl is now configured to use "no-preload-963457" cluster and "default" namespace by default
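
The ~39s pod_ready loop above is minikube's internal readiness polling; an equivalent one-shot check with plain kubectl (illustrative) would be:

    kubectl --context no-preload-963457 -n kube-system wait pod \
      -l k8s-app=kube-dns --for=condition=Ready --timeout=120s
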
	
	
	==> CRI-O <==
	Dec 27 09:37:37 newest-cni-246956 crio[520]: time="2025-12-27T09:37:37.054714922Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 09:37:37 newest-cni-246956 crio[520]: time="2025-12-27T09:37:37.058226195Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=a4965bd7-340c-4fe8-a7c9-822559e37ae9 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 27 09:37:37 newest-cni-246956 crio[520]: time="2025-12-27T09:37:37.058617857Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=c8e2b7fb-34a9-42e9-a19f-6baa2b2f9f7b name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 27 09:37:37 newest-cni-246956 crio[520]: time="2025-12-27T09:37:37.059918259Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 27 09:37:37 newest-cni-246956 crio[520]: time="2025-12-27T09:37:37.060566234Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 27 09:37:37 newest-cni-246956 crio[520]: time="2025-12-27T09:37:37.060874115Z" level=info msg="Ran pod sandbox bdf4b8ed50480bd375dfe42bfe4c4c765aa57352edec78e6c970632ad7589d6d with infra container: kube-system/kube-proxy-65ltj/POD" id=a4965bd7-340c-4fe8-a7c9-822559e37ae9 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 27 09:37:37 newest-cni-246956 crio[520]: time="2025-12-27T09:37:37.061734531Z" level=info msg="Ran pod sandbox 32524c18395e5c8435dc339ef971166ecdb8d7ac0de67d68e2d4915b82189f24 with infra container: kube-system/kindnet-lmtxw/POD" id=c8e2b7fb-34a9-42e9-a19f-6baa2b2f9f7b name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 27 09:37:37 newest-cni-246956 crio[520]: time="2025-12-27T09:37:37.063481263Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0" id=ad8994c7-3512-4255-859c-a3d75f244bf2 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 09:37:37 newest-cni-246956 crio[520]: time="2025-12-27T09:37:37.063887152Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88" id=b9c65461-d492-43c5-8537-8bb16c54e3c7 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 09:37:37 newest-cni-246956 crio[520]: time="2025-12-27T09:37:37.064416371Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0" id=a21e03e1-19dc-4498-863a-1c294284cd61 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 09:37:37 newest-cni-246956 crio[520]: time="2025-12-27T09:37:37.064808319Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88" id=6e26450f-7192-469c-9367-acb59eb56ca8 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 09:37:37 newest-cni-246956 crio[520]: time="2025-12-27T09:37:37.065387161Z" level=info msg="Creating container: kube-system/kube-proxy-65ltj/kube-proxy" id=d13a0345-c46f-47c5-af0b-e2352eeb574c name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 09:37:37 newest-cni-246956 crio[520]: time="2025-12-27T09:37:37.065501363Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 09:37:37 newest-cni-246956 crio[520]: time="2025-12-27T09:37:37.065740225Z" level=info msg="Creating container: kube-system/kindnet-lmtxw/kindnet-cni" id=da88b3ec-e125-42f8-ba9c-b372af5f0e2b name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 09:37:37 newest-cni-246956 crio[520]: time="2025-12-27T09:37:37.065851462Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 09:37:37 newest-cni-246956 crio[520]: time="2025-12-27T09:37:37.070493203Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 09:37:37 newest-cni-246956 crio[520]: time="2025-12-27T09:37:37.071151535Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 09:37:37 newest-cni-246956 crio[520]: time="2025-12-27T09:37:37.072369563Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 09:37:37 newest-cni-246956 crio[520]: time="2025-12-27T09:37:37.072916882Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 09:37:37 newest-cni-246956 crio[520]: time="2025-12-27T09:37:37.103159564Z" level=info msg="Created container 90d4a673309e96dc2ba67d645eafe93dc5d29a0ecfd31ba0ae335db9ed033522: kube-system/kindnet-lmtxw/kindnet-cni" id=da88b3ec-e125-42f8-ba9c-b372af5f0e2b name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 09:37:37 newest-cni-246956 crio[520]: time="2025-12-27T09:37:37.103955627Z" level=info msg="Starting container: 90d4a673309e96dc2ba67d645eafe93dc5d29a0ecfd31ba0ae335db9ed033522" id=e62cb125-e441-48aa-beaa-02a95bbd1a33 name=/runtime.v1.RuntimeService/StartContainer
	Dec 27 09:37:37 newest-cni-246956 crio[520]: time="2025-12-27T09:37:37.106278724Z" level=info msg="Started container" PID=1052 containerID=90d4a673309e96dc2ba67d645eafe93dc5d29a0ecfd31ba0ae335db9ed033522 description=kube-system/kindnet-lmtxw/kindnet-cni id=e62cb125-e441-48aa-beaa-02a95bbd1a33 name=/runtime.v1.RuntimeService/StartContainer sandboxID=32524c18395e5c8435dc339ef971166ecdb8d7ac0de67d68e2d4915b82189f24
	Dec 27 09:37:37 newest-cni-246956 crio[520]: time="2025-12-27T09:37:37.110094189Z" level=info msg="Created container af83311425132a6a300023f95645d55dec6cac5be7dc8ac1f7c804dee0171a95: kube-system/kube-proxy-65ltj/kube-proxy" id=d13a0345-c46f-47c5-af0b-e2352eeb574c name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 09:37:37 newest-cni-246956 crio[520]: time="2025-12-27T09:37:37.110585531Z" level=info msg="Starting container: af83311425132a6a300023f95645d55dec6cac5be7dc8ac1f7c804dee0171a95" id=eefb4a76-0f18-4c33-814f-1d667aa3cca2 name=/runtime.v1.RuntimeService/StartContainer
	Dec 27 09:37:37 newest-cni-246956 crio[520]: time="2025-12-27T09:37:37.113733623Z" level=info msg="Started container" PID=1051 containerID=af83311425132a6a300023f95645d55dec6cac5be7dc8ac1f7c804dee0171a95 description=kube-system/kube-proxy-65ltj/kube-proxy id=eefb4a76-0f18-4c33-814f-1d667aa3cca2 name=/runtime.v1.RuntimeService/StartContainer sandboxID=bdf4b8ed50480bd375dfe42bfe4c4c765aa57352edec78e6c970632ad7589d6d
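
The CRI-O entries above can be cross-checked from inside the node (e.g. via minikube ssh); crictl talks to the same runtime these messages come from and accepts the ID prefixes shown in the container status table below:

    sudo crictl ps -a
    sudo crictl logs 90d4a673309e9
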
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	90d4a673309e9       4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251   4 seconds ago       Running             kindnet-cni               1                   32524c18395e5       kindnet-lmtxw                               kube-system
	af83311425132       32652ff1bbe6b6df16a3dc621fcad4e6e185a2852bffc5847aab79e36c54bfb8   4 seconds ago       Running             kube-proxy                1                   bdf4b8ed50480       kube-proxy-65ltj                            kube-system
	2e65f48ec568a       550794e3b12ac21ec7fd940bdfb45f7c8dae3c52acd8c70e580b882de20c3dcc   6 seconds ago       Running             kube-scheduler            1                   2e72c9f7fc855       kube-scheduler-newest-cni-246956            kube-system
	f2b4d1b51b8b5       5c6acd67e9cd10eb60a246cd233db251ef62ea97e6572f897e873f0cb648f499   6 seconds ago       Running             kube-apiserver            1                   29cffb355d019       kube-apiserver-newest-cni-246956            kube-system
	5263ceb230522       0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2   6 seconds ago       Running             etcd                      1                   7580047475c45       etcd-newest-cni-246956                      kube-system
	270f3f6fdb6aa       2c9a4b058bd7e6ec479c38f9e8a1dac2f5ee5b0a3ebda6dfac968e8720229508   6 seconds ago       Running             kube-controller-manager   1                   09e80cbef7153       kube-controller-manager-newest-cni-246956   kube-system
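
The IMAGE column carries bare image IDs; resolving them back to repository tags is one command on the node (illustrative):

    sudo crictl images
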
	
	
	==> describe nodes <==
	Name:               newest-cni-246956
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-246956
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6d12708197eb5984f26340287ced0d25367967e8
	                    minikube.k8s.io/name=newest-cni-246956
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_27T09_37_08_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 27 Dec 2025 09:37:05 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-246956
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 27 Dec 2025 09:37:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 27 Dec 2025 09:37:36 +0000   Sat, 27 Dec 2025 09:37:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 27 Dec 2025 09:37:36 +0000   Sat, 27 Dec 2025 09:37:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 27 Dec 2025 09:37:36 +0000   Sat, 27 Dec 2025 09:37:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sat, 27 Dec 2025 09:37:36 +0000   Sat, 27 Dec 2025 09:37:03 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-246956
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 6a05ebbd9dbde47388fc365d694bbbfd
	  System UUID:                0282d3f4-6f31-42cc-85b2-77d015ffb093
	  Boot ID:                    38891013-55ac-4a66-aef6-0b0711ecc60c
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-246956                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         34s
	  kube-system                 kindnet-lmtxw                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      28s
	  kube-system                 kube-apiserver-newest-cni-246956             250m (3%)     0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-controller-manager-newest-cni-246956    200m (2%)     0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 kube-proxy-65ltj                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-scheduler-newest-cni-246956             100m (1%)     0 (0%)      0 (0%)           0 (0%)         34s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  29s   node-controller  Node newest-cni-246956 event: Registered Node newest-cni-246956 in Controller
	  Normal  RegisteredNode  2s    node-controller  Node newest-cni-246956 event: Registered Node newest-cni-246956 in Controller
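
The node.kubernetes.io/not-ready:NoSchedule taint under Taints: is what left coredns and storage-provisioner Unschedulable earlier in the trace; it is cleared automatically once the CNI is up and the Ready condition flips. The taint can be read directly (illustrative):

    kubectl --context newest-cni-246956 get node newest-cni-246956 \
      -o jsonpath='{.spec.taints}'
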
	
	
	==> dmesg <==
	[Dec27 09:02] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 3a ce 76 ea 00 16 08 06
	[ +15.308804] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 5e ed 76 f4 5a 59 08 06
	[  +0.000321] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 3a ce 76 ea 00 16 08 06
	[  +1.257710] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 62 30 8d 6e b6 d3 08 06
	[Dec27 09:03] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff de f8 3b c4 47 35 08 06
	[  +0.002052] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff de 0f d7 5a 4b 84 08 06
	[ +15.582399] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff f6 ce 0c 6c 86 d2 08 06
	[  +0.000307] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 62 30 8d 6e b6 d3 08 06
	[  +9.916919] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 5e cc 0c a0 d8 19 08 06
	[  +0.000360] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 06 01 72 f8 79 43 08 06
	[ +21.695394] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 66 bb be 81 50 ce 08 06
	[  +0.000381] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff de 0f d7 5a 4b 84 08 06
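
The martian-source lines are the kernel's log_martians reporting, not a test failure; whether that logging is enabled can be confirmed with:

    sysctl net.ipv4.conf.all.log_martians
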
	
	
	==> etcd [5263ceb230522a69b34eb816b6de3b89096d950d9862e603732cbf4a1c75836e] <==
	{"level":"info","ts":"2025-12-27T09:37:34.348501Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-27T09:37:34.348527Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-27T09:37:34.348722Z","caller":"embed/etcd.go:640","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-12-27T09:37:34.348743Z","caller":"embed/etcd.go:611","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-12-27T09:37:34.349375Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2025-12-27T09:37:34.349459Z","caller":"membership/cluster.go:433","msg":"ignore already added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2025-12-27T09:37:34.349536Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","from":"3.6","to":"3.6"}
	{"level":"info","ts":"2025-12-27T09:37:34.739638Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2025-12-27T09:37:34.739688Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-12-27T09:37:34.739782Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-12-27T09:37:34.739911Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-27T09:37:34.739939Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2025-12-27T09:37:34.741201Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-12-27T09:37:34.741300Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-27T09:37:34.741345Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2025-12-27T09:37:34.741375Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-12-27T09:37:34.742482Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:newest-cni-246956 ClientURLs:[https://192.168.76.2:2379]}","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-27T09:37:34.742534Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T09:37:34.742482Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T09:37:34.743442Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-27T09:37:34.743480Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-27T09:37:34.743924Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-27T09:37:34.745155Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-27T09:37:34.747439Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-27T09:37:34.749234Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	
	
	==> kernel <==
	 09:37:41 up  1:20,  0 user,  load average: 3.47, 3.15, 2.38
	Linux newest-cni-246956 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [90d4a673309e96dc2ba67d645eafe93dc5d29a0ecfd31ba0ae335db9ed033522] <==
	I1227 09:37:37.373599       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1227 09:37:37.373875       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1227 09:37:37.374004       1 main.go:148] setting mtu 1500 for CNI 
	I1227 09:37:37.374027       1 main.go:178] kindnetd IP family: "ipv4"
	I1227 09:37:37.374046       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-27T09:37:37Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1227 09:37:37.577204       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1227 09:37:37.577311       1 controller.go:381] "Waiting for informer caches to sync"
	I1227 09:37:37.577329       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1227 09:37:37.577482       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1227 09:37:37.877462       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1227 09:37:37.877491       1 metrics.go:72] Registering metrics
	I1227 09:37:37.877597       1 controller.go:711] "Syncing nftables rules"
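
kindnet is up and syncing nftables rules here, which is what resolves the "no CNI configuration file in /etc/cni/net.d/" NotReady condition shown in the node description above (the NRI line is informational; the sync proceeds regardless). The directory can be inspected directly on the node:

    ls /etc/cni/net.d/
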
	
	
	==> kube-apiserver [f2b4d1b51b8b5d05a7022c93ea1d77fcf987df7336ffa16aa507218f305194b7] <==
	I1227 09:37:35.922029       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1227 09:37:35.938671       1 shared_informer.go:377] "Caches are synced"
	I1227 09:37:35.939069       1 policy_source.go:248] refreshing policies
	I1227 09:37:35.939262       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1227 09:37:35.991048       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1227 09:37:35.991301       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1227 09:37:35.991196       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1227 09:37:35.991166       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1227 09:37:35.991263       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1227 09:37:35.991279       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1227 09:37:35.999230       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1227 09:37:36.005336       1 cache.go:39] Caches are synced for autoregister controller
	I1227 09:37:36.215497       1 controller.go:667] quota admission added evaluator for: namespaces
	I1227 09:37:36.243568       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1227 09:37:36.259137       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1227 09:37:36.265243       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1227 09:37:36.271838       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1227 09:37:36.297896       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.108.129.221"}
	I1227 09:37:36.306953       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.109.95.221"}
	I1227 09:37:36.795291       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1227 09:37:39.564557       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1227 09:37:39.564607       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1227 09:37:39.615970       1 controller.go:667] quota admission added evaluator for: endpoints
	I1227 09:37:39.666312       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1227 09:37:39.715857       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	
	
	==> kube-controller-manager [270f3f6fdb6aa9105489c1aa87a857c489190a97d3b5dadc4b9357252b50af3a] <==
	I1227 09:37:39.019086       1 shared_informer.go:377] "Caches are synced"
	I1227 09:37:39.018737       1 shared_informer.go:377] "Caches are synced"
	I1227 09:37:39.019177       1 shared_informer.go:377] "Caches are synced"
	I1227 09:37:39.019183       1 shared_informer.go:377] "Caches are synced"
	I1227 09:37:39.019201       1 shared_informer.go:377] "Caches are synced"
	I1227 09:37:39.019365       1 shared_informer.go:377] "Caches are synced"
	I1227 09:37:39.019380       1 shared_informer.go:377] "Caches are synced"
	I1227 09:37:39.019380       1 shared_informer.go:377] "Caches are synced"
	I1227 09:37:39.019403       1 shared_informer.go:377] "Caches are synced"
	I1227 09:37:39.019436       1 shared_informer.go:377] "Caches are synced"
	I1227 09:37:39.019495       1 shared_informer.go:377] "Caches are synced"
	I1227 09:37:39.019499       1 shared_informer.go:377] "Caches are synced"
	I1227 09:37:39.019533       1 shared_informer.go:377] "Caches are synced"
	I1227 09:37:39.019774       1 shared_informer.go:377] "Caches are synced"
	I1227 09:37:39.019845       1 shared_informer.go:377] "Caches are synced"
	I1227 09:37:39.019983       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I1227 09:37:39.020082       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="newest-cni-246956"
	I1227 09:37:39.020151       1 node_lifecycle_controller.go:1038] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I1227 09:37:39.020574       1 shared_informer.go:377] "Caches are synced"
	I1227 09:37:39.028033       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 09:37:39.034928       1 shared_informer.go:377] "Caches are synced"
	I1227 09:37:39.119323       1 shared_informer.go:377] "Caches are synced"
	I1227 09:37:39.119344       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1227 09:37:39.119349       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1227 09:37:39.128481       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [af83311425132a6a300023f95645d55dec6cac5be7dc8ac1f7c804dee0171a95] <==
	I1227 09:37:37.153344       1 server_linux.go:53] "Using iptables proxy"
	I1227 09:37:37.225876       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 09:37:37.326472       1 shared_informer.go:377] "Caches are synced"
	I1227 09:37:37.326508       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1227 09:37:37.326633       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1227 09:37:37.348492       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1227 09:37:37.348540       1 server_linux.go:136] "Using iptables Proxier"
	I1227 09:37:37.354044       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1227 09:37:37.354427       1 server.go:529] "Version info" version="v1.35.0"
	I1227 09:37:37.354447       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 09:37:37.356123       1 config.go:403] "Starting serviceCIDR config controller"
	I1227 09:37:37.356295       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1227 09:37:37.356328       1 config.go:106] "Starting endpoint slice config controller"
	I1227 09:37:37.356334       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1227 09:37:37.356121       1 config.go:200] "Starting service config controller"
	I1227 09:37:37.356348       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1227 09:37:37.356491       1 config.go:309] "Starting node config controller"
	I1227 09:37:37.356504       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1227 09:37:37.356511       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1227 09:37:37.456898       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1227 09:37:37.456920       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1227 09:37:37.456939       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
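	
	[editorial note] The kube-proxy block above confirms the iptables backend on IPv4 and all four config controllers syncing. As a hedged spot-check (not part of the captured log), the same evidence can be pulled from a live cluster, assuming the usual k8s-app=kube-proxy label that kubeadm applies:
	
	    kubectl --context newest-cni-246956 -n kube-system logs -l k8s-app=kube-proxy | grep -E "Using iptables Proxier|Caches are synced"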
	
	
	==> kube-scheduler [2e65f48ec568aa4a15024b2eea0baac4a75982b1ef72891857cb7ece1b229a93] <==
	I1227 09:37:34.509843       1 serving.go:386] Generated self-signed cert in-memory
	W1227 09:37:35.848838       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1227 09:37:35.848880       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1227 09:37:35.848895       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1227 09:37:35.848904       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1227 09:37:35.953531       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0"
	I1227 09:37:35.953637       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 09:37:35.956964       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1227 09:37:35.957862       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1227 09:37:35.957886       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 09:37:35.957917       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1227 09:37:36.058819       1 shared_informer.go:377] "Caches are synced"
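	
	[editorial note] The requestheader_controller warning above spells out its own remedy. A minimal sketch of that command, hedged because the denied principal here is the user system:kube-scheduler rather than a service account (so kubectl's --user flag fits better than the --serviceaccount placeholder in the log message), with a hypothetical binding name:
	
	    kubectl -n kube-system create rolebinding scheduler-authn-reader --role=extension-apiserver-authentication-reader --user=system:kube-scheduler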
	
	
	==> kubelet <==
	Dec 27 09:37:36 newest-cni-246956 kubelet[673]: E1227 09:37:36.028136     673 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-246956\" already exists" pod="kube-system/kube-scheduler-newest-cni-246956"
	Dec 27 09:37:36 newest-cni-246956 kubelet[673]: I1227 09:37:36.028274     673 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-246956"
	Dec 27 09:37:36 newest-cni-246956 kubelet[673]: E1227 09:37:36.033958     673 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-246956\" already exists" pod="kube-system/etcd-newest-cni-246956"
	Dec 27 09:37:36 newest-cni-246956 kubelet[673]: I1227 09:37:36.746382     673 apiserver.go:52] "Watching apiserver"
	Dec 27 09:37:36 newest-cni-246956 kubelet[673]: E1227 09:37:36.750549     673 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-newest-cni-246956" containerName="kube-controller-manager"
	Dec 27 09:37:36 newest-cni-246956 kubelet[673]: I1227 09:37:36.796315     673 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-246956"
	Dec 27 09:37:36 newest-cni-246956 kubelet[673]: I1227 09:37:36.796414     673 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-246956"
	Dec 27 09:37:36 newest-cni-246956 kubelet[673]: E1227 09:37:36.796550     673 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-246956" containerName="kube-apiserver"
	Dec 27 09:37:36 newest-cni-246956 kubelet[673]: E1227 09:37:36.801879     673 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-246956\" already exists" pod="kube-system/kube-scheduler-newest-cni-246956"
	Dec 27 09:37:36 newest-cni-246956 kubelet[673]: E1227 09:37:36.802090     673 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-246956" containerName="kube-scheduler"
	Dec 27 09:37:36 newest-cni-246956 kubelet[673]: E1227 09:37:36.802520     673 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-246956\" already exists" pod="kube-system/etcd-newest-cni-246956"
	Dec 27 09:37:36 newest-cni-246956 kubelet[673]: E1227 09:37:36.802604     673 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-246956" containerName="etcd"
	Dec 27 09:37:36 newest-cni-246956 kubelet[673]: I1227 09:37:36.850328     673 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Dec 27 09:37:36 newest-cni-246956 kubelet[673]: I1227 09:37:36.940640     673 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a1e5773a-e15f-405b-bca5-62a52d6e83a2-lib-modules\") pod \"kube-proxy-65ltj\" (UID: \"a1e5773a-e15f-405b-bca5-62a52d6e83a2\") " pod="kube-system/kube-proxy-65ltj"
	Dec 27 09:37:36 newest-cni-246956 kubelet[673]: I1227 09:37:36.940708     673 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e2185b04-5cba-4c54-86e0-9c2515f95074-xtables-lock\") pod \"kindnet-lmtxw\" (UID: \"e2185b04-5cba-4c54-86e0-9c2515f95074\") " pod="kube-system/kindnet-lmtxw"
	Dec 27 09:37:36 newest-cni-246956 kubelet[673]: I1227 09:37:36.940745     673 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a1e5773a-e15f-405b-bca5-62a52d6e83a2-xtables-lock\") pod \"kube-proxy-65ltj\" (UID: \"a1e5773a-e15f-405b-bca5-62a52d6e83a2\") " pod="kube-system/kube-proxy-65ltj"
	Dec 27 09:37:36 newest-cni-246956 kubelet[673]: I1227 09:37:36.940764     673 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e2185b04-5cba-4c54-86e0-9c2515f95074-lib-modules\") pod \"kindnet-lmtxw\" (UID: \"e2185b04-5cba-4c54-86e0-9c2515f95074\") " pod="kube-system/kindnet-lmtxw"
	Dec 27 09:37:36 newest-cni-246956 kubelet[673]: I1227 09:37:36.940827     673 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/e2185b04-5cba-4c54-86e0-9c2515f95074-cni-cfg\") pod \"kindnet-lmtxw\" (UID: \"e2185b04-5cba-4c54-86e0-9c2515f95074\") " pod="kube-system/kindnet-lmtxw"
	Dec 27 09:37:37 newest-cni-246956 kubelet[673]: E1227 09:37:37.345839     673 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-newest-cni-246956" containerName="kube-controller-manager"
	Dec 27 09:37:37 newest-cni-246956 kubelet[673]: E1227 09:37:37.802817     673 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-246956" containerName="etcd"
	Dec 27 09:37:37 newest-cni-246956 kubelet[673]: E1227 09:37:37.802998     673 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-246956" containerName="kube-scheduler"
	Dec 27 09:37:38 newest-cni-246956 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 27 09:37:38 newest-cni-246956 kubelet[673]: I1227 09:37:38.527957     673 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Dec 27 09:37:38 newest-cni-246956 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 27 09:37:38 newest-cni-246956 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
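	
	[editorial note] The closing kubelet entries show systemd deactivating kubelet.service, consistent with the pause under test: minikube stops the kubelet so it cannot restart the frozen containers. A hedged way to confirm the unit state from inside the node (assuming systemctl and journalctl are available in the kicbase image, as the entries above suggest):
	
	    out/minikube-linux-amd64 -p newest-cni-246956 ssh -- sudo systemctl is-active kubelet
	    out/minikube-linux-amd64 -p newest-cni-246956 ssh -- sudo journalctl -u kubelet -n 20 --no-pager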
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-246956 -n newest-cni-246956
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-246956 -n newest-cni-246956: exit status 2 (348.219812ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
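[editorial note] The "(may be ok)" tag reflects that minikube status encodes per-component state in its exit code, so a cluster whose components are intentionally stopped or paused returns non-zero even though the profile still exists. A hedged way to see every component at once instead of only the APIServer field:

	out/minikube-linux-amd64 status -p newest-cni-246956 --output json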
helpers_test.go:270: (dbg) Run:  kubectl --context newest-cni-246956 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: coredns-7d764666f9-kqzph storage-provisioner dashboard-metrics-scraper-867fb5f87b-9t9n8 kubernetes-dashboard-b84665fb8-9j86m
helpers_test.go:283: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context newest-cni-246956 describe pod coredns-7d764666f9-kqzph storage-provisioner dashboard-metrics-scraper-867fb5f87b-9t9n8 kubernetes-dashboard-b84665fb8-9j86m
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context newest-cni-246956 describe pod coredns-7d764666f9-kqzph storage-provisioner dashboard-metrics-scraper-867fb5f87b-9t9n8 kubernetes-dashboard-b84665fb8-9j86m: exit status 1 (60.268745ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-7d764666f9-kqzph" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-867fb5f87b-9t9n8" not found
	Error from server (NotFound): pods "kubernetes-dashboard-b84665fb8-9j86m" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context newest-cni-246956 describe pod coredns-7d764666f9-kqzph storage-provisioner dashboard-metrics-scraper-867fb5f87b-9t9n8 kubernetes-dashboard-b84665fb8-9j86m: exit status 1
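[editorial note] The NotFound errors are unsurprising: names such as coredns-7d764666f9-kqzph embed pod-template hashes, so pods deleted or re-created between the listing at helpers_test.go:270 and this describe call no longer resolve. A hedged one-shot variant that fetches fresh names in the same invocation:

	kubectl --context newest-cni-246956 get po -A --field-selector=status.phase!=Running -o name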
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-246956
helpers_test.go:244: (dbg) docker inspect newest-cni-246956:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "69aebd25b47b0182c49b994a470b77a9e5869eea562f8bba9c36596e6b80b45b",
	        "Created": "2025-12-27T09:36:55.867553755Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 641826,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-27T09:37:27.61803496Z",
	            "FinishedAt": "2025-12-27T09:37:26.333746866Z"
	        },
	        "Image": "sha256:c444b5e11df9d6bc496256b0e11f4e11deb33a885211ded2c3a4667df55bcb6b",
	        "ResolvConfPath": "/var/lib/docker/containers/69aebd25b47b0182c49b994a470b77a9e5869eea562f8bba9c36596e6b80b45b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/69aebd25b47b0182c49b994a470b77a9e5869eea562f8bba9c36596e6b80b45b/hostname",
	        "HostsPath": "/var/lib/docker/containers/69aebd25b47b0182c49b994a470b77a9e5869eea562f8bba9c36596e6b80b45b/hosts",
	        "LogPath": "/var/lib/docker/containers/69aebd25b47b0182c49b994a470b77a9e5869eea562f8bba9c36596e6b80b45b/69aebd25b47b0182c49b994a470b77a9e5869eea562f8bba9c36596e6b80b45b-json.log",
	        "Name": "/newest-cni-246956",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-246956:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-246956",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "69aebd25b47b0182c49b994a470b77a9e5869eea562f8bba9c36596e6b80b45b",
	                "LowerDir": "/var/lib/docker/overlay2/e04614f1bb8ab7149b79d93791f2b8a450769fb1c2807a94e1bc8526dafac32d-init/diff:/var/lib/docker/overlay2/8c7e7d4b0a6e752d3b033998593f18412fe880cdae3fca36ce4655d5d5dd6a34/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e04614f1bb8ab7149b79d93791f2b8a450769fb1c2807a94e1bc8526dafac32d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e04614f1bb8ab7149b79d93791f2b8a450769fb1c2807a94e1bc8526dafac32d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e04614f1bb8ab7149b79d93791f2b8a450769fb1c2807a94e1bc8526dafac32d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-246956",
	                "Source": "/var/lib/docker/volumes/newest-cni-246956/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-246956",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-246956",
	                "name.minikube.sigs.k8s.io": "newest-cni-246956",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "80814fc4ffd756c64bc6afc543b1698a6355b1ce5c6eb3c2d1b8bb82df0bf57d",
	            "SandboxKey": "/var/run/docker/netns/80814fc4ffd7",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33478"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33479"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33482"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33480"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33481"
	                    }
	                ]
	            },
	            "Networks": {
	                "newest-cni-246956": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "cdd553080fb08dffc74b490be6da4ecef89ab2e05674ec3b26e123185e152840",
	                    "EndpointID": "5376213e84fdc700e2cad491390ea11fa6f0287948a6e68e7d0ebe44714ce84d",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "36:2c:8b:35:b7:fc",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-246956",
	                        "69aebd25b47b"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
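[editorial note] The inspect output above is the full JSON document; a single field can be extracted with a Go template instead, using the same pattern this log applies to the 22/tcp port further down (the 8443/tcp key here is taken from the Ports map shown above):

	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' newest-cni-246956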
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-246956 -n newest-cni-246956
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-246956 -n newest-cni-246956: exit status 2 (330.75169ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-246956 logs -n 25
helpers_test.go:261: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p embed-certs-912564 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                        │ embed-certs-912564           │ jenkins │ v1.37.0 │ 27 Dec 25 09:36 UTC │ 27 Dec 25 09:36 UTC │
	│ addons  │ enable metrics-server -p no-preload-963457 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-963457            │ jenkins │ v1.37.0 │ 27 Dec 25 09:36 UTC │                     │
	│ stop    │ -p no-preload-963457 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-963457            │ jenkins │ v1.37.0 │ 27 Dec 25 09:36 UTC │ 27 Dec 25 09:36 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-497722 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-497722 │ jenkins │ v1.37.0 │ 27 Dec 25 09:36 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-497722 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-497722 │ jenkins │ v1.37.0 │ 27 Dec 25 09:36 UTC │ 27 Dec 25 09:36 UTC │
	│ image   │ old-k8s-version-094398 image list --format=json                                                                                                                                                                                               │ old-k8s-version-094398       │ jenkins │ v1.37.0 │ 27 Dec 25 09:36 UTC │ 27 Dec 25 09:36 UTC │
	│ pause   │ -p old-k8s-version-094398 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-094398       │ jenkins │ v1.37.0 │ 27 Dec 25 09:36 UTC │                     │
	│ delete  │ -p old-k8s-version-094398                                                                                                                                                                                                                     │ old-k8s-version-094398       │ jenkins │ v1.37.0 │ 27 Dec 25 09:36 UTC │ 27 Dec 25 09:36 UTC │
	│ addons  │ enable dashboard -p no-preload-963457 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-963457            │ jenkins │ v1.37.0 │ 27 Dec 25 09:36 UTC │ 27 Dec 25 09:36 UTC │
	│ start   │ -p no-preload-963457 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                       │ no-preload-963457            │ jenkins │ v1.37.0 │ 27 Dec 25 09:36 UTC │ 27 Dec 25 09:37 UTC │
	│ delete  │ -p old-k8s-version-094398                                                                                                                                                                                                                     │ old-k8s-version-094398       │ jenkins │ v1.37.0 │ 27 Dec 25 09:36 UTC │ 27 Dec 25 09:36 UTC │
	│ start   │ -p newest-cni-246956 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ newest-cni-246956            │ jenkins │ v1.37.0 │ 27 Dec 25 09:36 UTC │ 27 Dec 25 09:37 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-497722 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-497722 │ jenkins │ v1.37.0 │ 27 Dec 25 09:36 UTC │ 27 Dec 25 09:36 UTC │
	│ start   │ -p default-k8s-diff-port-497722 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-497722 │ jenkins │ v1.37.0 │ 27 Dec 25 09:36 UTC │ 27 Dec 25 09:37 UTC │
	│ image   │ embed-certs-912564 image list --format=json                                                                                                                                                                                                   │ embed-certs-912564           │ jenkins │ v1.37.0 │ 27 Dec 25 09:37 UTC │ 27 Dec 25 09:37 UTC │
	│ pause   │ -p embed-certs-912564 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-912564           │ jenkins │ v1.37.0 │ 27 Dec 25 09:37 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-246956 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-246956            │ jenkins │ v1.37.0 │ 27 Dec 25 09:37 UTC │                     │
	│ stop    │ -p newest-cni-246956 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-246956            │ jenkins │ v1.37.0 │ 27 Dec 25 09:37 UTC │ 27 Dec 25 09:37 UTC │
	│ delete  │ -p embed-certs-912564                                                                                                                                                                                                                         │ embed-certs-912564           │ jenkins │ v1.37.0 │ 27 Dec 25 09:37 UTC │ 27 Dec 25 09:37 UTC │
	│ delete  │ -p embed-certs-912564                                                                                                                                                                                                                         │ embed-certs-912564           │ jenkins │ v1.37.0 │ 27 Dec 25 09:37 UTC │ 27 Dec 25 09:37 UTC │
	│ start   │ -p auto-157923 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-157923                  │ jenkins │ v1.37.0 │ 27 Dec 25 09:37 UTC │                     │
	│ addons  │ enable dashboard -p newest-cni-246956 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-246956            │ jenkins │ v1.37.0 │ 27 Dec 25 09:37 UTC │ 27 Dec 25 09:37 UTC │
	│ start   │ -p newest-cni-246956 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ newest-cni-246956            │ jenkins │ v1.37.0 │ 27 Dec 25 09:37 UTC │ 27 Dec 25 09:37 UTC │
	│ image   │ newest-cni-246956 image list --format=json                                                                                                                                                                                                    │ newest-cni-246956            │ jenkins │ v1.37.0 │ 27 Dec 25 09:37 UTC │ 27 Dec 25 09:37 UTC │
	│ pause   │ -p newest-cni-246956 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-246956            │ jenkins │ v1.37.0 │ 27 Dec 25 09:37 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
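	
	[editorial note] Per the audit table, the failing step is the final pause entry, which records a start time but no end time. A hedged reproduction outside the test harness, reusing the exact flags logged above:
	
	    out/minikube-linux-amd64 pause -p newest-cni-246956 --alsologtostderr -v=1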
	
	
	==> Last Start <==
	Log file created at: 2025/12/27 09:37:27
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1227 09:37:27.378899  641411 out.go:360] Setting OutFile to fd 1 ...
	I1227 09:37:27.379011  641411 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:37:27.379016  641411 out.go:374] Setting ErrFile to fd 2...
	I1227 09:37:27.379021  641411 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:37:27.379238  641411 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22343-373581/.minikube/bin
	I1227 09:37:27.379675  641411 out.go:368] Setting JSON to false
	I1227 09:37:27.380937  641411 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4791,"bootTime":1766823456,"procs":363,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1227 09:37:27.380997  641411 start.go:143] virtualization: kvm guest
	I1227 09:37:27.382516  641411 out.go:179] * [newest-cni-246956] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1227 09:37:27.383534  641411 out.go:179]   - MINIKUBE_LOCATION=22343
	I1227 09:37:27.383557  641411 notify.go:221] Checking for updates...
	I1227 09:37:27.385354  641411 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 09:37:27.386240  641411 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22343-373581/kubeconfig
	I1227 09:37:27.387079  641411 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22343-373581/.minikube
	I1227 09:37:27.391187  641411 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1227 09:37:27.392096  641411 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1227 09:37:27.393369  641411 config.go:182] Loaded profile config "newest-cni-246956": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:37:27.394370  641411 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 09:37:27.421776  641411 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1227 09:37:27.421886  641411 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 09:37:27.481486  641411 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-27 09:37:27.470853214 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1227 09:37:27.481650  641411 docker.go:319] overlay module found
	I1227 09:37:27.483174  641411 out.go:179] * Using the docker driver based on existing profile
	I1227 09:37:27.484194  641411 start.go:309] selected driver: docker
	I1227 09:37:27.484211  641411 start.go:928] validating driver "docker" against &{Name:newest-cni-246956 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-246956 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 09:37:27.484329  641411 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 09:37:27.485149  641411 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 09:37:27.541981  641411 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-27 09:37:27.531020677 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1227 09:37:27.542283  641411 start_flags.go:1038] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1227 09:37:27.542328  641411 cni.go:84] Creating CNI manager for ""
	I1227 09:37:27.542404  641411 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 09:37:27.542469  641411 start.go:353] cluster config:
	{Name:newest-cni-246956 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-246956 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 09:37:27.544837  641411 out.go:179] * Starting "newest-cni-246956" primary control-plane node in "newest-cni-246956" cluster
	I1227 09:37:27.545721  641411 cache.go:134] Beginning downloading kic base image for docker with crio
	I1227 09:37:27.546820  641411 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
	I1227 09:37:27.547908  641411 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 09:37:27.547944  641411 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22343-373581/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4
	I1227 09:37:27.547954  641411 cache.go:65] Caching tarball of preloaded images
	I1227 09:37:27.548011  641411 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1227 09:37:27.548043  641411 preload.go:251] Found /home/jenkins/minikube-integration/22343-373581/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1227 09:37:27.548053  641411 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I1227 09:37:27.548175  641411 profile.go:143] Saving config to /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/newest-cni-246956/config.json ...
	I1227 09:37:27.567870  641411 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon, skipping pull
	I1227 09:37:27.567891  641411 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in daemon, skipping load
	I1227 09:37:27.567905  641411 cache.go:243] Successfully downloaded all kic artifacts
	I1227 09:37:27.567949  641411 start.go:360] acquireMachinesLock for newest-cni-246956: {Name:mkce071e540487b97cbc77937d99e9ae86cc89ee Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 09:37:27.568004  641411 start.go:364] duration metric: took 37.764µs to acquireMachinesLock for "newest-cni-246956"
	I1227 09:37:27.568022  641411 start.go:96] Skipping create...Using existing machine configuration
	I1227 09:37:27.568028  641411 fix.go:54] fixHost starting: 
	I1227 09:37:27.568299  641411 cli_runner.go:164] Run: docker container inspect newest-cni-246956 --format={{.State.Status}}
	I1227 09:37:27.590667  641411 fix.go:112] recreateIfNeeded on newest-cni-246956: state=Stopped err=<nil>
	W1227 09:37:27.590698  641411 fix.go:138] unexpected machine state, will restart: <nil>
	I1227 09:37:23.593860  640477 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1227 09:37:23.594049  640477 start.go:159] libmachine.API.Create for "auto-157923" (driver="docker")
	I1227 09:37:23.594077  640477 client.go:173] LocalClient.Create starting
	I1227 09:37:23.594137  640477 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca.pem
	I1227 09:37:23.594167  640477 main.go:144] libmachine: Decoding PEM data...
	I1227 09:37:23.594185  640477 main.go:144] libmachine: Parsing certificate...
	I1227 09:37:23.594243  640477 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22343-373581/.minikube/certs/cert.pem
	I1227 09:37:23.594262  640477 main.go:144] libmachine: Decoding PEM data...
	I1227 09:37:23.594273  640477 main.go:144] libmachine: Parsing certificate...
	I1227 09:37:23.594578  640477 cli_runner.go:164] Run: docker network inspect auto-157923 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1227 09:37:23.610819  640477 cli_runner.go:211] docker network inspect auto-157923 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1227 09:37:23.610896  640477 network_create.go:284] running [docker network inspect auto-157923] to gather additional debugging logs...
	I1227 09:37:23.610918  640477 cli_runner.go:164] Run: docker network inspect auto-157923
	W1227 09:37:23.627099  640477 cli_runner.go:211] docker network inspect auto-157923 returned with exit code 1
	I1227 09:37:23.627123  640477 network_create.go:287] error running [docker network inspect auto-157923]: docker network inspect auto-157923: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network auto-157923 not found
	I1227 09:37:23.627134  640477 network_create.go:289] output of [docker network inspect auto-157923]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network auto-157923 not found
	
	** /stderr **
	I1227 09:37:23.627260  640477 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 09:37:23.644238  640477 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-8c57ecff6d5e IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:1e:54:d8:ad:bd:73} reservation:<nil>}
	I1227 09:37:23.645188  640477 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-21a699476be6 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:fe:e8:d9:95:e6:36} reservation:<nil>}
	I1227 09:37:23.645807  640477 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-8e97c5356905 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:f2:d9:6b:42:f5:e3} reservation:<nil>}
	I1227 09:37:23.646472  640477 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-cdd553080fb0 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:92:73:29:56:1f:e3} reservation:<nil>}
	I1227 09:37:23.647038  640477 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-e27fce9ec482 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:26:8a:0d:7f:bc:3d} reservation:<nil>}
	I1227 09:37:23.647836  640477 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001df2770}
	I1227 09:37:23.647858  640477 network_create.go:124] attempt to create docker network auto-157923 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1227 09:37:23.647897  640477 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-157923 auto-157923
	I1227 09:37:23.696556  640477 network_create.go:108] docker network auto-157923 192.168.94.0/24 created
	I1227 09:37:23.696603  640477 kic.go:121] calculated static IP "192.168.94.2" for the "auto-157923" container
	I1227 09:37:23.696690  640477 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1227 09:37:23.713970  640477 cli_runner.go:164] Run: docker volume create auto-157923 --label name.minikube.sigs.k8s.io=auto-157923 --label created_by.minikube.sigs.k8s.io=true
	I1227 09:37:23.732094  640477 oci.go:103] Successfully created a docker volume auto-157923
	I1227 09:37:23.732173  640477 cli_runner.go:164] Run: docker run --rm --name auto-157923-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-157923 --entrypoint /usr/bin/test -v auto-157923:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -d /var/lib
	I1227 09:37:24.133562  640477 oci.go:107] Successfully prepared a docker volume auto-157923
	I1227 09:37:24.133644  640477 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 09:37:24.133660  640477 kic.go:194] Starting extracting preloaded images to volume ...
	I1227 09:37:24.133725  640477 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22343-373581/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v auto-157923:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -I lz4 -xf /preloaded.tar -C /extractDir
	I1227 09:37:27.070484  640477 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22343-373581/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v auto-157923:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -I lz4 -xf /preloaded.tar -C /extractDir: (2.936698922s)
	I1227 09:37:27.070519  640477 kic.go:203] duration metric: took 2.936854305s to extract preloaded images to volume ...
	W1227 09:37:27.070624  640477 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1227 09:37:27.070668  640477 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1227 09:37:27.070719  640477 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1227 09:37:27.125901  640477 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-157923 --name auto-157923 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-157923 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-157923 --network auto-157923 --ip 192.168.94.2 --volume auto-157923:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a
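All five exposed container ports are published to ephemeral host ports bound to 127.0.0.1; the mappings that the later `docker container inspect ... HostPort` templates read back can also be resolved directly (sketch):

	# Resolve the ephemeral host ports dockerd picked for the node container.
	docker port auto-157923 22      # SSH, used by the provisioner below
	docker port auto-157923 8443    # Kubernetes API server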
	I1227 09:37:27.415302  640477 cli_runner.go:164] Run: docker container inspect auto-157923 --format={{.State.Running}}
	I1227 09:37:27.434776  640477 cli_runner.go:164] Run: docker container inspect auto-157923 --format={{.State.Status}}
	I1227 09:37:27.457848  640477 cli_runner.go:164] Run: docker exec auto-157923 stat /var/lib/dpkg/alternatives/iptables
	I1227 09:37:27.504821  640477 oci.go:144] the created container "auto-157923" has a running status.
	I1227 09:37:27.504859  640477 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22343-373581/.minikube/machines/auto-157923/id_rsa...
	I1227 09:37:27.658588  640477 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22343-373581/.minikube/machines/auto-157923/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1227 09:37:27.687371  640477 cli_runner.go:164] Run: docker container inspect auto-157923 --format={{.State.Status}}
	I1227 09:37:27.712940  640477 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1227 09:37:27.712967  640477 kic_runner.go:114] Args: [docker exec --privileged auto-157923 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1227 09:37:27.773647  640477 cli_runner.go:164] Run: docker container inspect auto-157923 --format={{.State.Status}}
	I1227 09:37:27.796172  640477 machine.go:94] provisionDockerMachine start ...
	I1227 09:37:27.796272  640477 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-157923
	I1227 09:37:27.818454  640477 main.go:144] libmachine: Using SSH client type: native
	I1227 09:37:27.818781  640477 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33473 <nil> <nil>}
	I1227 09:37:27.818812  640477 main.go:144] libmachine: About to run SSH command:
	hostname
	I1227 09:37:27.956997  640477 main.go:144] libmachine: SSH cmd err, output: <nil>: auto-157923
	
	I1227 09:37:27.957026  640477 ubuntu.go:182] provisioning hostname "auto-157923"
	I1227 09:37:27.957086  640477 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-157923
	I1227 09:37:27.976063  640477 main.go:144] libmachine: Using SSH client type: native
	I1227 09:37:27.976373  640477 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33473 <nil> <nil>}
	I1227 09:37:27.976395  640477 main.go:144] libmachine: About to run SSH command:
	sudo hostname auto-157923 && echo "auto-157923" | sudo tee /etc/hostname
	I1227 09:37:28.114313  640477 main.go:144] libmachine: SSH cmd err, output: <nil>: auto-157923
	
	I1227 09:37:28.114408  640477 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-157923
	I1227 09:37:28.139306  640477 main.go:144] libmachine: Using SSH client type: native
	I1227 09:37:28.139654  640477 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33473 <nil> <nil>}
	I1227 09:37:28.139678  640477 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-157923' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-157923/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-157923' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1227 09:37:28.269032  640477 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1227 09:37:28.269063  640477 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22343-373581/.minikube CaCertPath:/home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22343-373581/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22343-373581/.minikube}
	I1227 09:37:28.269094  640477 ubuntu.go:190] setting up certificates
	I1227 09:37:28.269116  640477 provision.go:84] configureAuth start
	I1227 09:37:28.269181  640477 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-157923
	I1227 09:37:28.285950  640477 provision.go:143] copyHostCerts
	I1227 09:37:28.286010  640477 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-373581/.minikube/key.pem, removing ...
	I1227 09:37:28.286026  640477 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-373581/.minikube/key.pem
	I1227 09:37:28.286104  640477 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22343-373581/.minikube/key.pem (1679 bytes)
	I1227 09:37:28.286245  640477 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-373581/.minikube/ca.pem, removing ...
	I1227 09:37:28.286258  640477 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-373581/.minikube/ca.pem
	I1227 09:37:28.286301  640477 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22343-373581/.minikube/ca.pem (1082 bytes)
	I1227 09:37:28.286414  640477 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-373581/.minikube/cert.pem, removing ...
	I1227 09:37:28.286426  640477 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-373581/.minikube/cert.pem
	I1227 09:37:28.286460  640477 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22343-373581/.minikube/cert.pem (1123 bytes)
	I1227 09:37:28.286554  640477 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22343-373581/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca-key.pem org=jenkins.auto-157923 san=[127.0.0.1 192.168.94.2 auto-157923 localhost minikube]
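The server certificate generated here has to carry every name a client might dial, hence the SAN list in the log line above. If the SANs were ever in doubt they are visible with openssl (a verification sketch, not something the test itself runs):

	# Print the SANs baked into the generated docker-machine server cert.
	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/22343-373581/.minikube/machines/server.pem \
	  | grep -A1 'Subject Alternative Name'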
	W1227 09:37:25.538466  629532 pod_ready.go:104] pod "coredns-7d764666f9-wnzhx" is not "Ready", error: <nil>
	W1227 09:37:28.038757  629532 pod_ready.go:104] pod "coredns-7d764666f9-wnzhx" is not "Ready", error: <nil>
	I1227 09:37:28.443502  640477 provision.go:177] copyRemoteCerts
	I1227 09:37:28.443559  640477 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1227 09:37:28.443593  640477 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-157923
	I1227 09:37:28.461329  640477 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33473 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/auto-157923/id_rsa Username:docker}
	I1227 09:37:28.550382  640477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1227 09:37:28.568972  640477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1227 09:37:28.585729  640477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1227 09:37:28.602155  640477 provision.go:87] duration metric: took 333.013948ms to configureAuth
	I1227 09:37:28.602182  640477 ubuntu.go:206] setting minikube options for container-runtime
	I1227 09:37:28.602341  640477 config.go:182] Loaded profile config "auto-157923": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:37:28.602449  640477 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-157923
	I1227 09:37:28.619541  640477 main.go:144] libmachine: Using SSH client type: native
	I1227 09:37:28.619769  640477 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33473 <nil> <nil>}
	I1227 09:37:28.619806  640477 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1227 09:37:28.883754  640477 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1227 09:37:28.883780  640477 machine.go:97] duration metric: took 1.087580783s to provisionDockerMachine
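The sysconfig drop-in written above only takes effect because cri-o is restarted in the same SSH command. It can be double-checked from the host (illustrative; the node runs systemd, so systemctl works inside the container):

	# Confirm the insecure-registry flag landed and cri-o came back after the restart.
	docker exec auto-157923 cat /etc/sysconfig/crio.minikube
	docker exec auto-157923 systemctl is-active crio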
	I1227 09:37:28.883811  640477 client.go:176] duration metric: took 5.289706483s to LocalClient.Create
	I1227 09:37:28.883834  640477 start.go:167] duration metric: took 5.289784753s to libmachine.API.Create "auto-157923"
	I1227 09:37:28.883844  640477 start.go:293] postStartSetup for "auto-157923" (driver="docker")
	I1227 09:37:28.883856  640477 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1227 09:37:28.883915  640477 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1227 09:37:28.883952  640477 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-157923
	I1227 09:37:28.901443  640477 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33473 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/auto-157923/id_rsa Username:docker}
	I1227 09:37:28.992208  640477 ssh_runner.go:195] Run: cat /etc/os-release
	I1227 09:37:28.995626  640477 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1227 09:37:28.995659  640477 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1227 09:37:28.995674  640477 filesync.go:126] Scanning /home/jenkins/minikube-integration/22343-373581/.minikube/addons for local assets ...
	I1227 09:37:28.995736  640477 filesync.go:126] Scanning /home/jenkins/minikube-integration/22343-373581/.minikube/files for local assets ...
	I1227 09:37:28.995857  640477 filesync.go:149] local asset: /home/jenkins/minikube-integration/22343-373581/.minikube/files/etc/ssl/certs/3771712.pem -> 3771712.pem in /etc/ssl/certs
	I1227 09:37:28.995999  640477 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1227 09:37:29.003499  640477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/files/etc/ssl/certs/3771712.pem --> /etc/ssl/certs/3771712.pem (1708 bytes)
	I1227 09:37:29.022508  640477 start.go:296] duration metric: took 138.650587ms for postStartSetup
	I1227 09:37:29.022880  640477 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-157923
	I1227 09:37:29.041193  640477 profile.go:143] Saving config to /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/auto-157923/config.json ...
	I1227 09:37:29.041452  640477 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 09:37:29.041503  640477 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-157923
	I1227 09:37:29.058225  640477 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33473 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/auto-157923/id_rsa Username:docker}
	I1227 09:37:29.146558  640477 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1227 09:37:29.150983  640477 start.go:128] duration metric: took 5.558498758s to createHost
	I1227 09:37:29.151006  640477 start.go:83] releasing machines lock for "auto-157923", held for 5.55862422s
	I1227 09:37:29.151081  640477 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-157923
	I1227 09:37:29.168302  640477 ssh_runner.go:195] Run: cat /version.json
	I1227 09:37:29.168357  640477 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-157923
	I1227 09:37:29.168399  640477 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1227 09:37:29.168475  640477 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-157923
	I1227 09:37:29.185648  640477 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33473 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/auto-157923/id_rsa Username:docker}
	I1227 09:37:29.187085  640477 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33473 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/auto-157923/id_rsa Username:docker}
	I1227 09:37:29.327329  640477 ssh_runner.go:195] Run: systemctl --version
	I1227 09:37:29.333380  640477 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1227 09:37:29.367030  640477 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1227 09:37:29.371491  640477 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1227 09:37:29.371555  640477 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1227 09:37:29.396934  640477 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
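Renaming the bundled bridge/podman CNI configs to *.mk_disabled keeps cri-o from wiring its own pod network, leaving the field clear for the kindnet CNI selected later for the docker driver + crio combination. The effect is easy to see (sketch):

	# Only the .mk_disabled copies should remain; kindnet writes its own config later.
	docker exec auto-157923 ls /etc/cni/net.d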
	I1227 09:37:29.396955  640477 start.go:496] detecting cgroup driver to use...
	I1227 09:37:29.396991  640477 detect.go:190] detected "systemd" cgroup driver on host os
	I1227 09:37:29.397037  640477 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1227 09:37:29.413011  640477 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 09:37:29.424413  640477 docker.go:218] disabling cri-docker service (if available) ...
	I1227 09:37:29.424469  640477 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1227 09:37:29.440514  640477 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1227 09:37:29.456555  640477 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1227 09:37:29.535212  640477 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1227 09:37:29.621192  640477 docker.go:234] disabling docker service ...
	I1227 09:37:29.621266  640477 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1227 09:37:29.639272  640477 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1227 09:37:29.651656  640477 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1227 09:37:29.729754  640477 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1227 09:37:29.811800  640477 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1227 09:37:29.823537  640477 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 09:37:29.837297  640477 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1227 09:37:29.837350  640477 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:37:29.846843  640477 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1227 09:37:29.846891  640477 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:37:29.855217  640477 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:37:29.863186  640477 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:37:29.871466  640477 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1227 09:37:29.879936  640477 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:37:29.888174  640477 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:37:29.901122  640477 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:37:29.909263  640477 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1227 09:37:29.916108  640477 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1227 09:37:29.923007  640477 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 09:37:29.999546  640477 ssh_runner.go:195] Run: sudo systemctl restart crio
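The sed pipeline above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: pause image, systemd cgroup manager, conmon_cgroup, and the unprivileged-port sysctl, then reloads and restarts cri-o. One line confirms the directives it is expected to leave behind (illustrative):

	# The settings the sed edits should have produced.
	docker exec auto-157923 grep -E 'pause_image|cgroup_manager|conmon_cgroup' \
	  /etc/crio/crio.conf.d/02-crio.conf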
	I1227 09:37:30.136737  640477 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1227 09:37:30.136827  640477 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1227 09:37:30.140540  640477 start.go:574] Will wait 60s for crictl version
	I1227 09:37:30.140584  640477 ssh_runner.go:195] Run: which crictl
	I1227 09:37:30.144067  640477 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1227 09:37:30.169977  640477 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1227 09:37:30.170059  640477 ssh_runner.go:195] Run: crio --version
	I1227 09:37:30.197410  640477 ssh_runner.go:195] Run: crio --version
	I1227 09:37:30.225050  640477 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.34.3 ...
	W1227 09:37:26.875731  631392 pod_ready.go:104] pod "coredns-7d764666f9-wfv5r" is not "Ready", error: <nil>
	W1227 09:37:29.375572  631392 pod_ready.go:104] pod "coredns-7d764666f9-wfv5r" is not "Ready", error: <nil>
	I1227 09:37:30.225982  640477 cli_runner.go:164] Run: docker network inspect auto-157923 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 09:37:30.243309  640477 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1227 09:37:30.247353  640477 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 09:37:30.257557  640477 kubeadm.go:884] updating cluster {Name:auto-157923 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:auto-157923 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1227 09:37:30.257694  640477 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 09:37:30.257737  640477 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 09:37:30.289512  640477 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 09:37:30.289537  640477 crio.go:433] Images already preloaded, skipping extraction
	I1227 09:37:30.289592  640477 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 09:37:30.314579  640477 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 09:37:30.314601  640477 cache_images.go:86] Images are preloaded, skipping loading
	I1227 09:37:30.314608  640477 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.35.0 crio true true} ...
	I1227 09:37:30.314692  640477 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=auto-157923 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:auto-157923 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
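The empty `ExecStart=` line in the unit above is the standard systemd idiom for clearing the packaged command before substituting minikube's own kubelet invocation via the 10-kubeadm.conf drop-in written a few lines below. The merged unit can be reviewed in place (sketch):

	# Show the base kubelet unit plus the drop-in carrying the ExecStart override.
	docker exec auto-157923 systemctl cat kubelet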
	I1227 09:37:30.314755  640477 ssh_runner.go:195] Run: crio config
	I1227 09:37:30.360105  640477 cni.go:84] Creating CNI manager for ""
	I1227 09:37:30.360127  640477 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 09:37:30.360145  640477 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1227 09:37:30.360168  640477 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-157923 NodeName:auto-157923 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1227 09:37:30.360293  640477 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "auto-157923"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1227 09:37:30.360351  640477 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1227 09:37:30.368460  640477 binaries.go:51] Found k8s binaries, skipping transfer
	I1227 09:37:30.368522  640477 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1227 09:37:30.376914  640477 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (361 bytes)
	I1227 09:37:30.389564  640477 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1227 09:37:30.404731  640477 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2207 bytes)
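The three scp'd payloads correspond to the kubelet drop-in, the base unit, and the kubeadm config rendered above. Recent kubeadm releases can statically sanity-check such a file before init is ever attempted (a hedged sketch; availability of the `config validate` subcommand in the exact v1.35.0 build is an assumption):

	# Static validation of the rendered config against the kubeadm API types.
	docker exec auto-157923 sudo /var/lib/minikube/binaries/v1.35.0/kubeadm \
	  config validate --config /var/tmp/minikube/kubeadm.yaml.new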
	I1227 09:37:30.416866  640477 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1227 09:37:30.420364  640477 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 09:37:30.429591  640477 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 09:37:30.510148  640477 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 09:37:30.536573  640477 certs.go:69] Setting up /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/auto-157923 for IP: 192.168.94.2
	I1227 09:37:30.536596  640477 certs.go:195] generating shared ca certs ...
	I1227 09:37:30.536615  640477 certs.go:227] acquiring lock for ca certs: {Name:mkab52f7e7387ca5a3a8b54d27e85c3698ea95d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:37:30.536773  640477 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22343-373581/.minikube/ca.key
	I1227 09:37:30.536865  640477 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22343-373581/.minikube/proxy-client-ca.key
	I1227 09:37:30.536888  640477 certs.go:257] generating profile certs ...
	I1227 09:37:30.536964  640477 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/auto-157923/client.key
	I1227 09:37:30.536991  640477 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/auto-157923/client.crt with IP's: []
	I1227 09:37:30.624890  640477 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/auto-157923/client.crt ...
	I1227 09:37:30.624917  640477 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/auto-157923/client.crt: {Name:mk5fec5dc889a050ff07ef9dc8a0ee9dc572cec2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:37:30.625076  640477 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/auto-157923/client.key ...
	I1227 09:37:30.625088  640477 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/auto-157923/client.key: {Name:mk88d35ccf30cb328ea28c0bdbaa05748553964d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:37:30.625165  640477 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/auto-157923/apiserver.key.abc95ca7
	I1227 09:37:30.625180  640477 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/auto-157923/apiserver.crt.abc95ca7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1227 09:37:30.795196  640477 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/auto-157923/apiserver.crt.abc95ca7 ...
	I1227 09:37:30.795236  640477 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/auto-157923/apiserver.crt.abc95ca7: {Name:mk35abb026792aa312a3fafcde9a2d75bf696072 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:37:30.795436  640477 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/auto-157923/apiserver.key.abc95ca7 ...
	I1227 09:37:30.795455  640477 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/auto-157923/apiserver.key.abc95ca7: {Name:mkb5743db85c3505685ca0680323d5bc1c7a1b98 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:37:30.795567  640477 certs.go:382] copying /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/auto-157923/apiserver.crt.abc95ca7 -> /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/auto-157923/apiserver.crt
	I1227 09:37:30.795652  640477 certs.go:386] copying /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/auto-157923/apiserver.key.abc95ca7 -> /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/auto-157923/apiserver.key
	I1227 09:37:30.795708  640477 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/auto-157923/proxy-client.key
	I1227 09:37:30.795723  640477 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/auto-157923/proxy-client.crt with IP's: []
	I1227 09:37:30.998354  640477 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/auto-157923/proxy-client.crt ...
	I1227 09:37:30.998391  640477 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/auto-157923/proxy-client.crt: {Name:mk8e3feffc991877f9962038e1715d34d9322f92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:37:30.998591  640477 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/auto-157923/proxy-client.key ...
	I1227 09:37:30.998610  640477 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/auto-157923/proxy-client.key: {Name:mkbad445ba173273276d15e93d1c494f347429b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:37:30.998863  640477 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/377171.pem (1338 bytes)
	W1227 09:37:30.998921  640477 certs.go:480] ignoring /home/jenkins/minikube-integration/22343-373581/.minikube/certs/377171_empty.pem, impossibly tiny 0 bytes
	I1227 09:37:30.998938  640477 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca-key.pem (1679 bytes)
	I1227 09:37:30.998972  640477 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca.pem (1082 bytes)
	I1227 09:37:30.999005  640477 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/cert.pem (1123 bytes)
	I1227 09:37:30.999038  640477 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/key.pem (1679 bytes)
	I1227 09:37:30.999092  640477 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-373581/.minikube/files/etc/ssl/certs/3771712.pem (1708 bytes)
	I1227 09:37:30.999815  640477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1227 09:37:31.021838  640477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1227 09:37:31.041056  640477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1227 09:37:31.057858  640477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1227 09:37:31.075048  640477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/auto-157923/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1227 09:37:31.093905  640477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/auto-157923/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1227 09:37:31.112516  640477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/auto-157923/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1227 09:37:31.130323  640477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/auto-157923/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1227 09:37:31.149098  640477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1227 09:37:31.168329  640477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/certs/377171.pem --> /usr/share/ca-certificates/377171.pem (1338 bytes)
	I1227 09:37:31.186296  640477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/files/etc/ssl/certs/3771712.pem --> /usr/share/ca-certificates/3771712.pem (1708 bytes)
	I1227 09:37:31.203661  640477 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1227 09:37:31.216953  640477 ssh_runner.go:195] Run: openssl version
	I1227 09:37:31.223419  640477 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:37:31.231562  640477 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1227 09:37:31.241117  640477 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:37:31.245586  640477 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 09:06 /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:37:31.245653  640477 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:37:31.285920  640477 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1227 09:37:31.293932  640477 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1227 09:37:31.302905  640477 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/377171.pem
	I1227 09:37:31.310561  640477 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/377171.pem /etc/ssl/certs/377171.pem
	I1227 09:37:31.318173  640477 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/377171.pem
	I1227 09:37:31.322222  640477 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 27 09:09 /usr/share/ca-certificates/377171.pem
	I1227 09:37:31.322277  640477 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/377171.pem
	I1227 09:37:31.356462  640477 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1227 09:37:31.364014  640477 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/377171.pem /etc/ssl/certs/51391683.0
	I1227 09:37:31.371126  640477 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3771712.pem
	I1227 09:37:31.378844  640477 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3771712.pem /etc/ssl/certs/3771712.pem
	I1227 09:37:31.386398  640477 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3771712.pem
	I1227 09:37:31.390084  640477 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 27 09:09 /usr/share/ca-certificates/3771712.pem
	I1227 09:37:31.390139  640477 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3771712.pem
	I1227 09:37:31.428052  640477 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1227 09:37:31.435848  640477 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/3771712.pem /etc/ssl/certs/3ec20f2e.0
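Each CA cert above is linked into /etc/ssl/certs under its subject-hash name (the `openssl x509 -hash` outputs such as b5213941 and 3ec20f2e) so that OpenSSL's CApath lookup can resolve issuers. That wiring can be exercised end to end (illustrative sketch; apiserver.crt was copied into the node a few steps earlier):

	# CApath-style verification resolves the issuer via the <hash>.0 symlinks.
	docker exec auto-157923 openssl verify -CApath /etc/ssl/certs \
	  /var/lib/minikube/certs/apiserver.crt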
	I1227 09:37:31.443922  640477 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1227 09:37:31.447673  640477 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1227 09:37:31.447725  640477 kubeadm.go:401] StartCluster: {Name:auto-157923 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:auto-157923 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 09:37:31.447828  640477 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 09:37:31.447871  640477 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 09:37:31.475207  640477 cri.go:96] found id: ""
	I1227 09:37:31.475275  640477 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1227 09:37:31.483398  640477 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1227 09:37:31.491119  640477 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1227 09:37:31.491165  640477 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1227 09:37:31.498704  640477 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1227 09:37:31.498718  640477 kubeadm.go:158] found existing configuration files:
	
	I1227 09:37:31.498751  640477 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1227 09:37:31.506264  640477 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1227 09:37:31.506312  640477 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1227 09:37:31.513506  640477 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1227 09:37:31.520829  640477 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1227 09:37:31.520868  640477 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1227 09:37:31.527975  640477 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1227 09:37:31.536106  640477 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1227 09:37:31.536155  640477 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1227 09:37:31.544285  640477 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1227 09:37:31.552388  640477 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1227 09:37:31.552438  640477 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1227 09:37:31.559610  640477 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1227 09:37:31.597300  640477 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I1227 09:37:31.597363  640477 kubeadm.go:319] [preflight] Running pre-flight checks
	I1227 09:37:31.666942  640477 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1227 09:37:31.667040  640477 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1227 09:37:31.667077  640477 kubeadm.go:319] OS: Linux
	I1227 09:37:31.667155  640477 kubeadm.go:319] CGROUPS_CPU: enabled
	I1227 09:37:31.667269  640477 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1227 09:37:31.667358  640477 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1227 09:37:31.667426  640477 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1227 09:37:31.667509  640477 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1227 09:37:31.667580  640477 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1227 09:37:31.667657  640477 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1227 09:37:31.667719  640477 kubeadm.go:319] CGROUPS_IO: enabled
	I1227 09:37:31.728917  640477 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1227 09:37:31.729067  640477 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1227 09:37:31.729228  640477 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1227 09:37:31.737418  640477 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
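The long --ignore-preflight-errors list passed to kubeadm init above suppresses exactly the checks expected to fail inside a shared docker container (pre-populated directories, swap, CPU/memory floors, SystemVerification). The preflight phase can also be replayed in isolation for debugging (illustrative; the suppression list is abbreviated here):

	# Re-run only the preflight checks with the same kind of suppression list.
	docker exec auto-157923 sudo /var/lib/minikube/binaries/v1.35.0/kubeadm \
	  init phase preflight --config /var/tmp/minikube/kubeadm.yaml \
	  --ignore-preflight-errors=Swap,NumCPU,Mem,SystemVerification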
	I1227 09:37:27.592142  641411 out.go:252] * Restarting existing docker container for "newest-cni-246956" ...
	I1227 09:37:27.592203  641411 cli_runner.go:164] Run: docker start newest-cni-246956
	I1227 09:37:27.886738  641411 cli_runner.go:164] Run: docker container inspect newest-cni-246956 --format={{.State.Status}}
	I1227 09:37:27.907223  641411 kic.go:430] container "newest-cni-246956" state is running.
	I1227 09:37:27.907710  641411 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-246956
	I1227 09:37:27.928177  641411 profile.go:143] Saving config to /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/newest-cni-246956/config.json ...
	I1227 09:37:27.928467  641411 machine.go:94] provisionDockerMachine start ...
	I1227 09:37:27.928564  641411 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-246956
	I1227 09:37:27.948875  641411 main.go:144] libmachine: Using SSH client type: native
	I1227 09:37:27.949165  641411 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33478 <nil> <nil>}
	I1227 09:37:27.949178  641411 main.go:144] libmachine: About to run SSH command:
	hostname
	I1227 09:37:27.949816  641411 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58996->127.0.0.1:33478: read: connection reset by peer
	I1227 09:37:31.080379  641411 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-246956
	
	I1227 09:37:31.080410  641411 ubuntu.go:182] provisioning hostname "newest-cni-246956"
	I1227 09:37:31.080477  641411 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-246956
	I1227 09:37:31.099049  641411 main.go:144] libmachine: Using SSH client type: native
	I1227 09:37:31.099379  641411 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33478 <nil> <nil>}
	I1227 09:37:31.099406  641411 main.go:144] libmachine: About to run SSH command:
	sudo hostname newest-cni-246956 && echo "newest-cni-246956" | sudo tee /etc/hostname
	I1227 09:37:31.234303  641411 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-246956
	
	I1227 09:37:31.234383  641411 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-246956
	I1227 09:37:31.256173  641411 main.go:144] libmachine: Using SSH client type: native
	I1227 09:37:31.256398  641411 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33478 <nil> <nil>}
	I1227 09:37:31.256429  641411 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-246956' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-246956/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-246956' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1227 09:37:31.384140  641411 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1227 09:37:31.384172  641411 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22343-373581/.minikube CaCertPath:/home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22343-373581/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22343-373581/.minikube}
	I1227 09:37:31.384227  641411 ubuntu.go:190] setting up certificates
	I1227 09:37:31.384241  641411 provision.go:84] configureAuth start
	I1227 09:37:31.384296  641411 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-246956
	I1227 09:37:31.404607  641411 provision.go:143] copyHostCerts
	I1227 09:37:31.404662  641411 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-373581/.minikube/ca.pem, removing ...
	I1227 09:37:31.404678  641411 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-373581/.minikube/ca.pem
	I1227 09:37:31.404741  641411 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22343-373581/.minikube/ca.pem (1082 bytes)
	I1227 09:37:31.404876  641411 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-373581/.minikube/cert.pem, removing ...
	I1227 09:37:31.404888  641411 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-373581/.minikube/cert.pem
	I1227 09:37:31.404921  641411 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22343-373581/.minikube/cert.pem (1123 bytes)
	I1227 09:37:31.404982  641411 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-373581/.minikube/key.pem, removing ...
	I1227 09:37:31.404990  641411 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-373581/.minikube/key.pem
	I1227 09:37:31.405014  641411 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22343-373581/.minikube/key.pem (1679 bytes)
	I1227 09:37:31.405069  641411 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22343-373581/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca-key.pem org=jenkins.newest-cni-246956 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-246956]
	I1227 09:37:31.529908  641411 provision.go:177] copyRemoteCerts
	I1227 09:37:31.529965  641411 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1227 09:37:31.530016  641411 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-246956
	I1227 09:37:31.548471  641411 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33478 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/newest-cni-246956/id_rsa Username:docker}
	I1227 09:37:31.643235  641411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1227 09:37:31.662940  641411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1227 09:37:31.681581  641411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1227 09:37:31.699489  641411 provision.go:87] duration metric: took 315.224154ms to configureAuth
	I1227 09:37:31.699512  641411 ubuntu.go:206] setting minikube options for container-runtime
	I1227 09:37:31.699705  641411 config.go:182] Loaded profile config "newest-cni-246956": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:37:31.699846  641411 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-246956
	I1227 09:37:31.720243  641411 main.go:144] libmachine: Using SSH client type: native
	I1227 09:37:31.720532  641411 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33478 <nil> <nil>}
	I1227 09:37:31.720551  641411 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1227 09:37:32.003127  641411 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1227 09:37:32.003152  641411 machine.go:97] duration metric: took 4.074663584s to provisionDockerMachine
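
The SSH command above drops a one-line sysconfig file and restarts CRI-O so the daemon picks up the extra flag. A minimal by-hand equivalent, runnable inside the node over `minikube ssh` (path and option string are exactly what the log shows; the final cat is only a verification step added here):

    sudo mkdir -p /etc/sysconfig
    printf '%s\n' "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '" \
      | sudo tee /etc/sysconfig/crio.minikube
    sudo systemctl restart crio
    cat /etc/sysconfig/crio.minikube   # confirm the drop-in landed
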
	I1227 09:37:32.003168  641411 start.go:293] postStartSetup for "newest-cni-246956" (driver="docker")
	I1227 09:37:32.003182  641411 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1227 09:37:32.003242  641411 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1227 09:37:32.003288  641411 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-246956
	I1227 09:37:32.023832  641411 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33478 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/newest-cni-246956/id_rsa Username:docker}
	I1227 09:37:32.116325  641411 ssh_runner.go:195] Run: cat /etc/os-release
	I1227 09:37:32.119772  641411 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1227 09:37:32.119814  641411 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1227 09:37:32.119828  641411 filesync.go:126] Scanning /home/jenkins/minikube-integration/22343-373581/.minikube/addons for local assets ...
	I1227 09:37:32.119880  641411 filesync.go:126] Scanning /home/jenkins/minikube-integration/22343-373581/.minikube/files for local assets ...
	I1227 09:37:32.119961  641411 filesync.go:149] local asset: /home/jenkins/minikube-integration/22343-373581/.minikube/files/etc/ssl/certs/3771712.pem -> 3771712.pem in /etc/ssl/certs
	I1227 09:37:32.120060  641411 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1227 09:37:32.127840  641411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/files/etc/ssl/certs/3771712.pem --> /etc/ssl/certs/3771712.pem (1708 bytes)
	I1227 09:37:32.145199  641411 start.go:296] duration metric: took 142.013746ms for postStartSetup
	I1227 09:37:32.145294  641411 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 09:37:32.145339  641411 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-246956
	I1227 09:37:32.164659  641411 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33478 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/newest-cni-246956/id_rsa Username:docker}
	I1227 09:37:32.252473  641411 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1227 09:37:32.257294  641411 fix.go:56] duration metric: took 4.68926s for fixHost
	I1227 09:37:32.257322  641411 start.go:83] releasing machines lock for "newest-cni-246956", held for 4.689308555s
	I1227 09:37:32.257385  641411 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-246956
	I1227 09:37:32.275910  641411 ssh_runner.go:195] Run: cat /version.json
	I1227 09:37:32.275959  641411 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-246956
	I1227 09:37:32.276004  641411 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1227 09:37:32.276088  641411 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-246956
	I1227 09:37:32.295004  641411 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33478 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/newest-cni-246956/id_rsa Username:docker}
	I1227 09:37:32.295362  641411 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33478 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/newest-cni-246956/id_rsa Username:docker}
	I1227 09:37:32.442235  641411 ssh_runner.go:195] Run: systemctl --version
	I1227 09:37:32.448779  641411 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1227 09:37:32.483613  641411 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1227 09:37:32.488246  641411 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1227 09:37:32.488302  641411 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1227 09:37:32.496138  641411 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
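
The find invocation a few lines up is how minikube sidelines pre-existing bridge/podman CNI configs: matching files are renamed with a .mk_disabled suffix rather than deleted, so they can be restored later. The logger strips shell quoting; a reconstruction of the probable quoting (this relies on GNU find substituting {} anywhere inside -exec arguments):

    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -o -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
      -printf '%p, ' -exec sh -c 'sudo mv {} {}.mk_disabled' \;
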
	I1227 09:37:32.496164  641411 start.go:496] detecting cgroup driver to use...
	I1227 09:37:32.496193  641411 detect.go:190] detected "systemd" cgroup driver on host os
	I1227 09:37:32.496227  641411 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1227 09:37:32.510039  641411 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 09:37:32.521549  641411 docker.go:218] disabling cri-docker service (if available) ...
	I1227 09:37:32.521624  641411 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1227 09:37:32.536748  641411 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1227 09:37:32.549010  641411 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1227 09:37:32.628232  641411 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1227 09:37:32.707229  641411 docker.go:234] disabling docker service ...
	I1227 09:37:32.707308  641411 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1227 09:37:32.723133  641411 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1227 09:37:32.735630  641411 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1227 09:37:32.826300  641411 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1227 09:37:32.909512  641411 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1227 09:37:32.921687  641411 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 09:37:32.935959  641411 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1227 09:37:32.936011  641411 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:37:32.945093  641411 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1227 09:37:32.945146  641411 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:37:32.954219  641411 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:37:32.962484  641411 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:37:32.970616  641411 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1227 09:37:32.978185  641411 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:37:32.986920  641411 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:37:32.994668  641411 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:37:33.002741  641411 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1227 09:37:33.009746  641411 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1227 09:37:33.016488  641411 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 09:37:33.094917  641411 ssh_runner.go:195] Run: sudo systemctl restart crio
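
Taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf with roughly these keys before the restart. This is reconstructed from the commands in the log, not dumped from the node, and the TOML section placement shown is CRI-O's usual layout, assumed here:

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10.1"

    [crio.runtime]
    cgroup_manager = "systemd"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]
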
	I1227 09:37:33.241717  641411 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1227 09:37:33.241836  641411 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1227 09:37:33.245878  641411 start.go:574] Will wait 60s for crictl version
	I1227 09:37:33.245919  641411 ssh_runner.go:195] Run: which crictl
	I1227 09:37:33.249536  641411 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1227 09:37:33.272986  641411 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1227 09:37:33.273053  641411 ssh_runner.go:195] Run: crio --version
	I1227 09:37:33.301152  641411 ssh_runner.go:195] Run: crio --version
	I1227 09:37:33.330066  641411 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.34.3 ...
	I1227 09:37:33.331033  641411 cli_runner.go:164] Run: docker network inspect newest-cni-246956 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 09:37:33.348463  641411 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1227 09:37:33.352489  641411 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
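
The hosts-file one-liner above is idempotent: it filters out any existing host.minikube.internal entry, appends a fresh one, and copies (rather than renames) the temp file back, because /etc/hosts inside a Docker container is bind-mounted and cannot be replaced by a rename. Unescaped form:

    { grep -v $'\thost.minikube.internal$' /etc/hosts
      printf '192.168.76.1\thost.minikube.internal\n'
    } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts
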
	I1227 09:37:33.363652  641411 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1227 09:37:31.739982  640477 out.go:252]   - Generating certificates and keys ...
	I1227 09:37:31.740085  640477 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1227 09:37:31.740186  640477 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1227 09:37:31.813670  640477 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1227 09:37:31.844564  640477 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1227 09:37:31.919681  640477 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1227 09:37:32.158345  640477 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1227 09:37:32.193433  640477 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1227 09:37:32.193573  640477 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [auto-157923 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1227 09:37:32.256805  640477 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1227 09:37:32.256960  640477 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [auto-157923 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1227 09:37:32.314748  640477 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1227 09:37:32.437985  640477 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1227 09:37:32.603035  640477 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1227 09:37:32.603107  640477 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1227 09:37:32.828246  640477 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1227 09:37:32.945526  640477 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1227 09:37:33.166491  640477 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1227 09:37:33.306335  640477 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1227 09:37:33.465489  640477 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1227 09:37:33.466384  640477 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1227 09:37:33.473319  640477 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1227 09:37:33.364464  641411 kubeadm.go:884] updating cluster {Name:newest-cni-246956 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-246956 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1227 09:37:33.364588  641411 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 09:37:33.364631  641411 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 09:37:33.399571  641411 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 09:37:33.399600  641411 crio.go:433] Images already preloaded, skipping extraction
	I1227 09:37:33.399654  641411 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 09:37:33.424967  641411 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 09:37:33.424990  641411 cache_images.go:86] Images are preloaded, skipping loading
	I1227 09:37:33.425007  641411 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0 crio true true} ...
	I1227 09:37:33.425107  641411 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-246956 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:newest-cni-246956 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
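
That drop-in is written below as /etc/systemd/system/kubelet.service.d/10-kubeadm.conf; the empty ExecStart= line is the standard systemd idiom for clearing the base unit's command before overriding it. What systemd actually merged can be inspected with:

    systemctl cat kubelet
    systemctl show kubelet -p ExecStart
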
	I1227 09:37:33.425190  641411 ssh_runner.go:195] Run: crio config
	I1227 09:37:33.481692  641411 cni.go:84] Creating CNI manager for ""
	I1227 09:37:33.481718  641411 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 09:37:33.481739  641411 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1227 09:37:33.481784  641411 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-246956 NodeName:newest-cni-246956 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1227 09:37:33.481963  641411 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-246956"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
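
The rendered config above is scp'd to /var/tmp/minikube/kubeadm.yaml.new just below. Recent kubeadm releases can sanity-check such a file offline before it is consumed; a minimal sketch, assuming `kubeadm config validate` is available in this build:

    sudo /var/lib/minikube/binaries/v1.35.0/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new
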
	I1227 09:37:33.482037  641411 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1227 09:37:33.491243  641411 binaries.go:51] Found k8s binaries, skipping transfer
	I1227 09:37:33.491305  641411 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1227 09:37:33.499311  641411 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1227 09:37:33.512666  641411 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1227 09:37:33.524552  641411 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2211 bytes)
	I1227 09:37:33.543595  641411 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1227 09:37:33.547713  641411 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 09:37:33.558045  641411 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 09:37:33.655308  641411 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 09:37:33.678337  641411 certs.go:69] Setting up /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/newest-cni-246956 for IP: 192.168.76.2
	I1227 09:37:33.678365  641411 certs.go:195] generating shared ca certs ...
	I1227 09:37:33.678390  641411 certs.go:227] acquiring lock for ca certs: {Name:mkab52f7e7387ca5a3a8b54d27e85c3698ea95d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:37:33.678571  641411 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22343-373581/.minikube/ca.key
	I1227 09:37:33.678626  641411 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22343-373581/.minikube/proxy-client-ca.key
	I1227 09:37:33.678639  641411 certs.go:257] generating profile certs ...
	I1227 09:37:33.678770  641411 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/newest-cni-246956/client.key
	I1227 09:37:33.678871  641411 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/newest-cni-246956/apiserver.key.c99eabfc
	I1227 09:37:33.678929  641411 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/newest-cni-246956/proxy-client.key
	I1227 09:37:33.679062  641411 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/377171.pem (1338 bytes)
	W1227 09:37:33.679103  641411 certs.go:480] ignoring /home/jenkins/minikube-integration/22343-373581/.minikube/certs/377171_empty.pem, impossibly tiny 0 bytes
	I1227 09:37:33.679115  641411 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca-key.pem (1679 bytes)
	I1227 09:37:33.679152  641411 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca.pem (1082 bytes)
	I1227 09:37:33.679186  641411 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/cert.pem (1123 bytes)
	I1227 09:37:33.679217  641411 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/key.pem (1679 bytes)
	I1227 09:37:33.679272  641411 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-373581/.minikube/files/etc/ssl/certs/3771712.pem (1708 bytes)
	I1227 09:37:33.680163  641411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1227 09:37:33.700256  641411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1227 09:37:33.719575  641411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1227 09:37:33.739211  641411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1227 09:37:33.765510  641411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/newest-cni-246956/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1227 09:37:33.783923  641411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/newest-cni-246956/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1227 09:37:33.800754  641411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/newest-cni-246956/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1227 09:37:33.817425  641411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/newest-cni-246956/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1227 09:37:33.835912  641411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/files/etc/ssl/certs/3771712.pem --> /usr/share/ca-certificates/3771712.pem (1708 bytes)
	I1227 09:37:33.857612  641411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1227 09:37:33.878374  641411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/certs/377171.pem --> /usr/share/ca-certificates/377171.pem (1338 bytes)
	I1227 09:37:33.894901  641411 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1227 09:37:33.906496  641411 ssh_runner.go:195] Run: openssl version
	I1227 09:37:33.912309  641411 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:37:33.919094  641411 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1227 09:37:33.925901  641411 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:37:33.929289  641411 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 09:06 /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:37:33.929330  641411 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:37:33.963963  641411 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1227 09:37:33.972499  641411 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/377171.pem
	I1227 09:37:33.980436  641411 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/377171.pem /etc/ssl/certs/377171.pem
	I1227 09:37:33.987756  641411 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/377171.pem
	I1227 09:37:33.991337  641411 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 27 09:09 /usr/share/ca-certificates/377171.pem
	I1227 09:37:33.991405  641411 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/377171.pem
	I1227 09:37:34.025577  641411 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1227 09:37:34.033620  641411 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3771712.pem
	I1227 09:37:34.041719  641411 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3771712.pem /etc/ssl/certs/3771712.pem
	I1227 09:37:34.049022  641411 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3771712.pem
	I1227 09:37:34.052508  641411 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 27 09:09 /usr/share/ca-certificates/3771712.pem
	I1227 09:37:34.052553  641411 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3771712.pem
	I1227 09:37:34.087835  641411 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
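
The test -L probes above (b5213941.0, 51391683.0, 3ec20f2e.0) follow OpenSSL's subject-hash convention: a CA in /etc/ssl/certs is only picked up if a <subject_hash>.0 symlink points at it, which is exactly what each preceding ln -fs / openssl x509 -hash pair establishes. Generic form for one cert:

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"
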
	I1227 09:37:34.095026  641411 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1227 09:37:34.098686  641411 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1227 09:37:34.135392  641411 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1227 09:37:34.171147  641411 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1227 09:37:34.221456  641411 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1227 09:37:34.266559  641411 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1227 09:37:34.328054  641411 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
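
Each -checkend 86400 run above asks whether the certificate will still be valid 86400 seconds (24 h) from now: openssl exits 0 if it will, non-zero if it expires inside the window, which is how minikube decides whether regeneration is needed. Single-cert form:

    sudo openssl x509 -noout -checkend 86400 \
      -in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
      && echo 'valid for >24h' || echo 'expires within 24h'
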
	I1227 09:37:34.389758  641411 kubeadm.go:401] StartCluster: {Name:newest-cni-246956 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-246956 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 09:37:34.390081  641411 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 09:37:34.390187  641411 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 09:37:34.437285  641411 cri.go:96] found id: "2e65f48ec568aa4a15024b2eea0baac4a75982b1ef72891857cb7ece1b229a93"
	I1227 09:37:34.437317  641411 cri.go:96] found id: "f2b4d1b51b8b5d05a7022c93ea1d77fcf987df7336ffa16aa507218f305194b7"
	I1227 09:37:34.437323  641411 cri.go:96] found id: "5263ceb230522a69b34eb816b6de3b89096d950d9862e603732cbf4a1c75836e"
	I1227 09:37:34.437328  641411 cri.go:96] found id: "270f3f6fdb6aa9105489c1aa87a857c489190a97d3b5dadc4b9357252b50af3a"
	I1227 09:37:34.437332  641411 cri.go:96] found id: ""
	I1227 09:37:34.437382  641411 ssh_runner.go:195] Run: sudo runc list -f json
	W1227 09:37:34.456922  641411 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T09:37:34Z" level=error msg="open /run/runc: no such file or directory"
	I1227 09:37:34.457009  641411 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1227 09:37:34.473303  641411 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1227 09:37:34.473335  641411 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1227 09:37:34.473385  641411 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1227 09:37:34.483013  641411 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1227 09:37:34.483924  641411 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-246956" does not appear in /home/jenkins/minikube-integration/22343-373581/kubeconfig
	I1227 09:37:34.484378  641411 kubeconfig.go:62] /home/jenkins/minikube-integration/22343-373581/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-246956" cluster setting kubeconfig missing "newest-cni-246956" context setting]
	I1227 09:37:34.485153  641411 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-373581/kubeconfig: {Name:mkb411689a0952d3f8325ae3ededdb7352d2980c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:37:34.487339  641411 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1227 09:37:34.497863  641411 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1227 09:37:34.497895  641411 kubeadm.go:602] duration metric: took 24.553545ms to restartPrimaryControlPlane
	I1227 09:37:34.497906  641411 kubeadm.go:403] duration metric: took 108.159942ms to StartCluster
	I1227 09:37:34.497923  641411 settings.go:142] acquiring lock: {Name:mkc867fd794159ff1847b43b60548e454c403aa9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:37:34.497980  641411 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22343-373581/kubeconfig
	I1227 09:37:34.499333  641411 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-373581/kubeconfig: {Name:mkb411689a0952d3f8325ae3ededdb7352d2980c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:37:34.499604  641411 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1227 09:37:34.499714  641411 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1227 09:37:34.499830  641411 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-246956"
	I1227 09:37:34.499847  641411 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-246956"
	W1227 09:37:34.499855  641411 addons.go:248] addon storage-provisioner should already be in state true
	I1227 09:37:34.499872  641411 config.go:182] Loaded profile config "newest-cni-246956": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:37:34.499884  641411 host.go:66] Checking if "newest-cni-246956" exists ...
	I1227 09:37:34.499927  641411 addons.go:70] Setting dashboard=true in profile "newest-cni-246956"
	I1227 09:37:34.499940  641411 addons.go:239] Setting addon dashboard=true in "newest-cni-246956"
	W1227 09:37:34.499949  641411 addons.go:248] addon dashboard should already be in state true
	I1227 09:37:34.499975  641411 host.go:66] Checking if "newest-cni-246956" exists ...
	I1227 09:37:34.500407  641411 cli_runner.go:164] Run: docker container inspect newest-cni-246956 --format={{.State.Status}}
	I1227 09:37:34.500508  641411 cli_runner.go:164] Run: docker container inspect newest-cni-246956 --format={{.State.Status}}
	I1227 09:37:34.500508  641411 addons.go:70] Setting default-storageclass=true in profile "newest-cni-246956"
	I1227 09:37:34.500530  641411 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-246956"
	I1227 09:37:34.500863  641411 cli_runner.go:164] Run: docker container inspect newest-cni-246956 --format={{.State.Status}}
	I1227 09:37:34.501608  641411 out.go:179] * Verifying Kubernetes components...
	I1227 09:37:34.502594  641411 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 09:37:34.533328  641411 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1227 09:37:34.536455  641411 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1227 09:37:34.536570  641411 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1227 09:37:34.536581  641411 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1227 09:37:34.536641  641411 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-246956
	I1227 09:37:34.541757  641411 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	W1227 09:37:30.537950  629532 pod_ready.go:104] pod "coredns-7d764666f9-wnzhx" is not "Ready", error: <nil>
	W1227 09:37:32.538158  629532 pod_ready.go:104] pod "coredns-7d764666f9-wnzhx" is not "Ready", error: <nil>
	W1227 09:37:34.547357  629532 pod_ready.go:104] pod "coredns-7d764666f9-wnzhx" is not "Ready", error: <nil>
	W1227 09:37:31.875863  631392 pod_ready.go:104] pod "coredns-7d764666f9-wfv5r" is not "Ready", error: <nil>
	W1227 09:37:34.377714  631392 pod_ready.go:104] pod "coredns-7d764666f9-wfv5r" is not "Ready", error: <nil>
	I1227 09:37:34.542155  641411 addons.go:239] Setting addon default-storageclass=true in "newest-cni-246956"
	W1227 09:37:34.542177  641411 addons.go:248] addon default-storageclass should already be in state true
	I1227 09:37:34.542206  641411 host.go:66] Checking if "newest-cni-246956" exists ...
	I1227 09:37:34.542658  641411 cli_runner.go:164] Run: docker container inspect newest-cni-246956 --format={{.State.Status}}
	I1227 09:37:34.543046  641411 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1227 09:37:34.543064  641411 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1227 09:37:34.543149  641411 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-246956
	I1227 09:37:34.569737  641411 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33478 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/newest-cni-246956/id_rsa Username:docker}
	I1227 09:37:34.579505  641411 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1227 09:37:34.579526  641411 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1227 09:37:34.579588  641411 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-246956
	I1227 09:37:34.584102  641411 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33478 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/newest-cni-246956/id_rsa Username:docker}
	I1227 09:37:34.608811  641411 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33478 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/newest-cni-246956/id_rsa Username:docker}
	I1227 09:37:34.685560  641411 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 09:37:34.696216  641411 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1227 09:37:34.704691  641411 api_server.go:52] waiting for apiserver process to appear ...
	I1227 09:37:34.704856  641411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 09:37:34.713073  641411 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1227 09:37:34.713098  641411 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1227 09:37:34.728707  641411 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1227 09:37:34.731147  641411 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1227 09:37:34.731173  641411 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1227 09:37:34.765845  641411 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1227 09:37:34.765878  641411 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1227 09:37:34.789264  641411 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1227 09:37:34.789291  641411 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1227 09:37:34.814432  641411 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1227 09:37:34.814462  641411 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1227 09:37:34.831380  641411 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1227 09:37:34.831417  641411 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1227 09:37:34.848477  641411 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1227 09:37:34.848504  641411 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1227 09:37:34.863996  641411 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1227 09:37:34.864027  641411 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1227 09:37:34.883520  641411 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1227 09:37:34.883549  641411 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1227 09:37:34.900847  641411 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1227 09:37:36.406511  641411 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.710183067s)
	I1227 09:37:36.406554  641411 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.701672543s)
	I1227 09:37:36.406594  641411 api_server.go:72] duration metric: took 1.906951366s to wait for apiserver process to appear ...
	I1227 09:37:36.406603  641411 api_server.go:88] waiting for apiserver healthz status ...
	I1227 09:37:36.406617  641411 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.677865022s)
	I1227 09:37:36.406628  641411 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1227 09:37:36.406758  641411 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.505868765s)
	I1227 09:37:36.411098  641411 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-246956 addons enable metrics-server
	
	I1227 09:37:36.412244  641411 api_server.go:325] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1227 09:37:36.412262  641411 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1227 09:37:36.420084  641411 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1227 09:37:36.421032  641411 addons.go:530] duration metric: took 1.921329634s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1227 09:37:36.907243  641411 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1227 09:37:36.912278  641411 api_server.go:325] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1227 09:37:36.912313  641411 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1227 09:37:37.407602  641411 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1227 09:37:37.411578  641411 api_server.go:325] https://192.168.76.2:8443/healthz returned 200:
	ok
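
The itemized [+]/[-] listings above are the apiserver's verbose healthz output; minikube simply polls until the failing post-start hooks flip to ok and the endpoint returns a plain 200. The same probe by hand (IP and port taken from this log; depending on the cluster's anonymous-auth settings the authenticated form may be required):

    curl -sk 'https://192.168.76.2:8443/healthz?verbose'
    # or, with credentials:
    kubectl --context newest-cni-246956 get --raw '/healthz?verbose'
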
	I1227 09:37:37.412840  641411 api_server.go:141] control plane version: v1.35.0
	I1227 09:37:37.412870  641411 api_server.go:131] duration metric: took 1.006255426s to wait for apiserver health ...
	I1227 09:37:37.412882  641411 system_pods.go:43] waiting for kube-system pods to appear ...
	I1227 09:37:37.416623  641411 system_pods.go:59] 8 kube-system pods found
	I1227 09:37:37.416669  641411 system_pods.go:61] "coredns-7d764666f9-kqzph" [cd4faccb-5994-46cb-a83b-d554df2fb8f2] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1227 09:37:37.416685  641411 system_pods.go:61] "etcd-newest-cni-246956" [26721526-906a-4949-a50f-92ea210b80be] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1227 09:37:37.416700  641411 system_pods.go:61] "kindnet-lmtxw" [e2185b04-5cba-4c54-86e0-9c2515f95074] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1227 09:37:37.416710  641411 system_pods.go:61] "kube-apiserver-newest-cni-246956" [7e3043fd-edc4-4182-8659-eba54f67a2d9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1227 09:37:37.416732  641411 system_pods.go:61] "kube-controller-manager-newest-cni-246956" [7a30adc9-ce06-4908-8f0b-ed3da78f6394] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1227 09:37:37.416745  641411 system_pods.go:61] "kube-proxy-65ltj" [a1e5773a-e15f-405b-bca5-62a52d6e83a2] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1227 09:37:37.416753  641411 system_pods.go:61] "kube-scheduler-newest-cni-246956" [e515cbde-415b-4a69-b0be-a4c87c86858e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1227 09:37:37.416760  641411 system_pods.go:61] "storage-provisioner" [0735bc86-6017-4c08-8562-4a36fe686929] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1227 09:37:37.416771  641411 system_pods.go:74] duration metric: took 3.882996ms to wait for pod list to return data ...
	I1227 09:37:37.416784  641411 default_sa.go:34] waiting for default service account to be created ...
	I1227 09:37:37.419297  641411 default_sa.go:45] found service account: "default"
	I1227 09:37:37.419317  641411 default_sa.go:55] duration metric: took 2.526247ms for default service account to be created ...
	I1227 09:37:37.419328  641411 kubeadm.go:587] duration metric: took 2.919690651s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1227 09:37:37.419352  641411 node_conditions.go:102] verifying NodePressure condition ...
	I1227 09:37:37.421724  641411 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1227 09:37:37.421754  641411 node_conditions.go:123] node cpu capacity is 8
	I1227 09:37:37.421770  641411 node_conditions.go:105] duration metric: took 2.41226ms to run NodePressure ...
	I1227 09:37:37.421785  641411 start.go:242] waiting for startup goroutines ...
	I1227 09:37:37.421826  641411 start.go:247] waiting for cluster config update ...
	I1227 09:37:37.421844  641411 start.go:256] writing updated cluster config ...
	I1227 09:37:37.422146  641411 ssh_runner.go:195] Run: rm -f paused
	I1227 09:37:37.473689  641411 start.go:625] kubectl: 1.35.0, cluster: 1.35.0 (minor skew: 0)
	I1227 09:37:37.476027  641411 out.go:179] * Done! kubectl is now configured to use "newest-cni-246956" cluster and "default" namespace by default
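The run above only reports "Done!" after its healthz poll flips from the "[-]poststarthook/rbac/bootstrap-roles failed" response to a plain 200 "ok" (api_server.go lines above). A minimal Go sketch of that kind of poll loop, assuming a self-signed cluster certificate (hence InsecureSkipVerify); the endpoint, retry interval, and deadline are illustrative, not minikube's actual implementation:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// pollHealthz polls the apiserver /healthz endpoint until it returns 200
// or the deadline expires. The sketch does not load the cluster CA, so it
// skips certificate verification (acceptable only for a local test node).
func pollHealthz(url string, deadline time.Duration) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	stop := time.Now().Add(deadline)
	for time.Now().Before(stop) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz returned 200: ok
			}
		}
		time.Sleep(500 * time.Millisecond) // retry interval is illustrative
	}
	return fmt.Errorf("apiserver %s not healthy after %s", url, deadline)
}

func main() {
	if err := pollHealthz("https://192.168.76.2:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}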
	I1227 09:37:33.474697  640477 out.go:252]   - Booting up control plane ...
	I1227 09:37:33.474850  640477 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1227 09:37:33.474954  640477 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1227 09:37:33.476497  640477 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1227 09:37:33.495717  640477 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1227 09:37:33.495865  640477 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1227 09:37:33.502491  640477 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1227 09:37:33.502850  640477 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1227 09:37:33.502916  640477 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1227 09:37:33.613095  640477 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1227 09:37:33.613272  640477 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1227 09:37:34.114153  640477 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.316468ms
	I1227 09:37:34.118694  640477 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1227 09:37:34.118842  640477 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8443/livez
	I1227 09:37:34.118945  640477 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1227 09:37:34.119024  640477 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1227 09:37:35.123740  640477 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.004896698s
	I1227 09:37:35.928654  640477 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.809807783s
	I1227 09:37:37.620162  640477 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.501336374s
	I1227 09:37:37.636307  640477 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1227 09:37:37.648028  640477 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1227 09:37:37.655852  640477 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1227 09:37:37.656073  640477 kubeadm.go:319] [mark-control-plane] Marking the node auto-157923 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1227 09:37:37.664167  640477 kubeadm.go:319] [bootstrap-token] Using token: h84766.ayttqn53s0bkjj4i
	I1227 09:37:37.665421  640477 out.go:252]   - Configuring RBAC rules ...
	I1227 09:37:37.665584  640477 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1227 09:37:37.668276  640477 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1227 09:37:37.673023  640477 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1227 09:37:37.675538  640477 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1227 09:37:37.678012  640477 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1227 09:37:37.681014  640477 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1227 09:37:38.026331  640477 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1227 09:37:38.440680  640477 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1227 09:37:39.027874  640477 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1227 09:37:39.029989  640477 kubeadm.go:319] 
	I1227 09:37:39.030077  640477 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1227 09:37:39.030091  640477 kubeadm.go:319] 
	I1227 09:37:39.030185  640477 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1227 09:37:39.030196  640477 kubeadm.go:319] 
	I1227 09:37:39.030226  640477 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1227 09:37:39.030293  640477 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1227 09:37:39.030352  640477 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1227 09:37:39.030358  640477 kubeadm.go:319] 
	I1227 09:37:39.030435  640477 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1227 09:37:39.030441  640477 kubeadm.go:319] 
	I1227 09:37:39.030498  640477 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1227 09:37:39.030504  640477 kubeadm.go:319] 
	I1227 09:37:39.030560  640477 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1227 09:37:39.030651  640477 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1227 09:37:39.030735  640477 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1227 09:37:39.030742  640477 kubeadm.go:319] 
	I1227 09:37:39.030870  640477 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1227 09:37:39.030967  640477 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1227 09:37:39.030972  640477 kubeadm.go:319] 
	I1227 09:37:39.031068  640477 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token h84766.ayttqn53s0bkjj4i \
	I1227 09:37:39.031198  640477 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:fdd38f26fbaca57ed47cc723a4a49372698d5a31b243372bf53d5b274efa96f5 \
	I1227 09:37:39.031224  640477 kubeadm.go:319] 	--control-plane 
	I1227 09:37:39.031230  640477 kubeadm.go:319] 
	I1227 09:37:39.032078  640477 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1227 09:37:39.032096  640477 kubeadm.go:319] 
	I1227 09:37:39.032199  640477 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token h84766.ayttqn53s0bkjj4i \
	I1227 09:37:39.032327  640477 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:fdd38f26fbaca57ed47cc723a4a49372698d5a31b243372bf53d5b274efa96f5 
	I1227 09:37:39.037635  640477 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1227 09:37:39.037837  640477 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1227 09:37:39.037892  640477 cni.go:84] Creating CNI manager for ""
	I1227 09:37:39.037903  640477 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 09:37:39.039169  640477 out.go:179] * Configuring CNI (Container Networking Interface) ...
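The cni.go lines above show minikube choosing a CNI from the driver/runtime pair: the "docker" driver with the "crio" runtime gets kindnet. A hedged sketch of that decision shape; the one rule shown is taken from this log line, while the fallback branch is a placeholder and not minikube's real rule set:

package main

import "fmt"

// chooseCNI mirrors the decision visible in the log above: the docker
// driver paired with a non-docker runtime (here crio) gets kindnet.
// The default branch below is a placeholder, not minikube's actual logic.
func chooseCNI(driver, runtime string) string {
	if driver == "docker" && runtime != "docker" {
		return "kindnet"
	}
	return "bridge" // placeholder default
}

func main() {
	fmt.Println(chooseCNI("docker", "crio")) // kindnet
}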
	W1227 09:37:37.038856  629532 pod_ready.go:104] pod "coredns-7d764666f9-wnzhx" is not "Ready", error: <nil>
	I1227 09:37:39.040488  629532 pod_ready.go:94] pod "coredns-7d764666f9-wnzhx" is "Ready"
	I1227 09:37:39.040525  629532 pod_ready.go:86] duration metric: took 38.008133875s for pod "coredns-7d764666f9-wnzhx" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:37:39.043728  629532 pod_ready.go:83] waiting for pod "etcd-no-preload-963457" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:37:39.048854  629532 pod_ready.go:94] pod "etcd-no-preload-963457" is "Ready"
	I1227 09:37:39.048879  629532 pod_ready.go:86] duration metric: took 5.124573ms for pod "etcd-no-preload-963457" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:37:39.050919  629532 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-963457" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:37:39.054702  629532 pod_ready.go:94] pod "kube-apiserver-no-preload-963457" is "Ready"
	I1227 09:37:39.054722  629532 pod_ready.go:86] duration metric: took 3.783045ms for pod "kube-apiserver-no-preload-963457" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:37:39.056399  629532 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-963457" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:37:39.237401  629532 pod_ready.go:94] pod "kube-controller-manager-no-preload-963457" is "Ready"
	I1227 09:37:39.237437  629532 pod_ready.go:86] duration metric: took 181.018956ms for pod "kube-controller-manager-no-preload-963457" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:37:39.437496  629532 pod_ready.go:83] waiting for pod "kube-proxy-grkqs" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:37:39.838657  629532 pod_ready.go:94] pod "kube-proxy-grkqs" is "Ready"
	I1227 09:37:39.838692  629532 pod_ready.go:86] duration metric: took 401.169709ms for pod "kube-proxy-grkqs" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:37:40.039092  629532 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-963457" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:37:40.436525  629532 pod_ready.go:94] pod "kube-scheduler-no-preload-963457" is "Ready"
	I1227 09:37:40.436550  629532 pod_ready.go:86] duration metric: took 397.425819ms for pod "kube-scheduler-no-preload-963457" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:37:40.436560  629532 pod_ready.go:40] duration metric: took 39.410112609s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1227 09:37:40.484672  629532 start.go:625] kubectl: 1.35.0, cluster: 1.35.0 (minor skew: 0)
	I1227 09:37:40.485924  629532 out.go:179] * Done! kubectl is now configured to use "no-preload-963457" cluster and "default" namespace by default
	W1227 09:37:36.876426  631392 pod_ready.go:104] pod "coredns-7d764666f9-wfv5r" is not "Ready", error: <nil>
	W1227 09:37:38.877418  631392 pod_ready.go:104] pod "coredns-7d764666f9-wfv5r" is not "Ready", error: <nil>
	I1227 09:37:39.875401  631392 pod_ready.go:94] pod "coredns-7d764666f9-wfv5r" is "Ready"
	I1227 09:37:39.875436  631392 pod_ready.go:86] duration metric: took 33.505363567s for pod "coredns-7d764666f9-wfv5r" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:37:39.878038  631392 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-497722" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:37:39.882056  631392 pod_ready.go:94] pod "etcd-default-k8s-diff-port-497722" is "Ready"
	I1227 09:37:39.882085  631392 pod_ready.go:86] duration metric: took 4.025055ms for pod "etcd-default-k8s-diff-port-497722" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:37:39.884050  631392 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-497722" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:37:39.887412  631392 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-497722" is "Ready"
	I1227 09:37:39.887430  631392 pod_ready.go:86] duration metric: took 3.360274ms for pod "kube-apiserver-default-k8s-diff-port-497722" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:37:39.889183  631392 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-497722" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:37:40.074033  631392 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-497722" is "Ready"
	I1227 09:37:40.074067  631392 pod_ready.go:86] duration metric: took 184.865005ms for pod "kube-controller-manager-default-k8s-diff-port-497722" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:37:40.273202  631392 pod_ready.go:83] waiting for pod "kube-proxy-6z4vt" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:37:40.674034  631392 pod_ready.go:94] pod "kube-proxy-6z4vt" is "Ready"
	I1227 09:37:40.674062  631392 pod_ready.go:86] duration metric: took 400.830126ms for pod "kube-proxy-6z4vt" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:37:40.873230  631392 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-497722" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:37:41.273403  631392 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-497722" is "Ready"
	I1227 09:37:41.273431  631392 pod_ready.go:86] duration metric: took 400.174545ms for pod "kube-scheduler-default-k8s-diff-port-497722" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:37:41.273443  631392 pod_ready.go:40] duration metric: took 34.907166469s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1227 09:37:41.327174  631392 start.go:625] kubectl: 1.35.0, cluster: 1.35.0 (minor skew: 0)
	I1227 09:37:41.331899  631392 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-497722" cluster and "default" namespace by default
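The pod_ready.go waits above repeatedly fetch each kube-system pod until its Ready condition turns True (the coredns pod alone took 33-38s in these runs). A minimal client-go sketch of such a wait, assuming the default kubeconfig location; the poll interval and timeout are illustrative choices, not minikube's:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls a pod until its Ready condition is True, mirroring
// the pod_ready.go waits in the log above.
func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.Background(), name, metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(2 * time.Second) // poll interval is illustrative
	}
	return fmt.Errorf("pod %s/%s not Ready after %s", ns, name, timeout)
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(config)
	fmt.Println(waitPodReady(cs, "kube-system", "coredns-7d764666f9-wfv5r", 4*time.Minute))
}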
	
	
	==> CRI-O <==
	Dec 27 09:37:37 newest-cni-246956 crio[520]: time="2025-12-27T09:37:37.054714922Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 09:37:37 newest-cni-246956 crio[520]: time="2025-12-27T09:37:37.058226195Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=a4965bd7-340c-4fe8-a7c9-822559e37ae9 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 27 09:37:37 newest-cni-246956 crio[520]: time="2025-12-27T09:37:37.058617857Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=c8e2b7fb-34a9-42e9-a19f-6baa2b2f9f7b name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 27 09:37:37 newest-cni-246956 crio[520]: time="2025-12-27T09:37:37.059918259Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 27 09:37:37 newest-cni-246956 crio[520]: time="2025-12-27T09:37:37.060566234Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 27 09:37:37 newest-cni-246956 crio[520]: time="2025-12-27T09:37:37.060874115Z" level=info msg="Ran pod sandbox bdf4b8ed50480bd375dfe42bfe4c4c765aa57352edec78e6c970632ad7589d6d with infra container: kube-system/kube-proxy-65ltj/POD" id=a4965bd7-340c-4fe8-a7c9-822559e37ae9 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 27 09:37:37 newest-cni-246956 crio[520]: time="2025-12-27T09:37:37.061734531Z" level=info msg="Ran pod sandbox 32524c18395e5c8435dc339ef971166ecdb8d7ac0de67d68e2d4915b82189f24 with infra container: kube-system/kindnet-lmtxw/POD" id=c8e2b7fb-34a9-42e9-a19f-6baa2b2f9f7b name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 27 09:37:37 newest-cni-246956 crio[520]: time="2025-12-27T09:37:37.063481263Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0" id=ad8994c7-3512-4255-859c-a3d75f244bf2 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 09:37:37 newest-cni-246956 crio[520]: time="2025-12-27T09:37:37.063887152Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88" id=b9c65461-d492-43c5-8537-8bb16c54e3c7 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 09:37:37 newest-cni-246956 crio[520]: time="2025-12-27T09:37:37.064416371Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0" id=a21e03e1-19dc-4498-863a-1c294284cd61 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 09:37:37 newest-cni-246956 crio[520]: time="2025-12-27T09:37:37.064808319Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88" id=6e26450f-7192-469c-9367-acb59eb56ca8 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 09:37:37 newest-cni-246956 crio[520]: time="2025-12-27T09:37:37.065387161Z" level=info msg="Creating container: kube-system/kube-proxy-65ltj/kube-proxy" id=d13a0345-c46f-47c5-af0b-e2352eeb574c name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 09:37:37 newest-cni-246956 crio[520]: time="2025-12-27T09:37:37.065501363Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 09:37:37 newest-cni-246956 crio[520]: time="2025-12-27T09:37:37.065740225Z" level=info msg="Creating container: kube-system/kindnet-lmtxw/kindnet-cni" id=da88b3ec-e125-42f8-ba9c-b372af5f0e2b name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 09:37:37 newest-cni-246956 crio[520]: time="2025-12-27T09:37:37.065851462Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 09:37:37 newest-cni-246956 crio[520]: time="2025-12-27T09:37:37.070493203Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 09:37:37 newest-cni-246956 crio[520]: time="2025-12-27T09:37:37.071151535Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 09:37:37 newest-cni-246956 crio[520]: time="2025-12-27T09:37:37.072369563Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 09:37:37 newest-cni-246956 crio[520]: time="2025-12-27T09:37:37.072916882Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 09:37:37 newest-cni-246956 crio[520]: time="2025-12-27T09:37:37.103159564Z" level=info msg="Created container 90d4a673309e96dc2ba67d645eafe93dc5d29a0ecfd31ba0ae335db9ed033522: kube-system/kindnet-lmtxw/kindnet-cni" id=da88b3ec-e125-42f8-ba9c-b372af5f0e2b name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 09:37:37 newest-cni-246956 crio[520]: time="2025-12-27T09:37:37.103955627Z" level=info msg="Starting container: 90d4a673309e96dc2ba67d645eafe93dc5d29a0ecfd31ba0ae335db9ed033522" id=e62cb125-e441-48aa-beaa-02a95bbd1a33 name=/runtime.v1.RuntimeService/StartContainer
	Dec 27 09:37:37 newest-cni-246956 crio[520]: time="2025-12-27T09:37:37.106278724Z" level=info msg="Started container" PID=1052 containerID=90d4a673309e96dc2ba67d645eafe93dc5d29a0ecfd31ba0ae335db9ed033522 description=kube-system/kindnet-lmtxw/kindnet-cni id=e62cb125-e441-48aa-beaa-02a95bbd1a33 name=/runtime.v1.RuntimeService/StartContainer sandboxID=32524c18395e5c8435dc339ef971166ecdb8d7ac0de67d68e2d4915b82189f24
	Dec 27 09:37:37 newest-cni-246956 crio[520]: time="2025-12-27T09:37:37.110094189Z" level=info msg="Created container af83311425132a6a300023f95645d55dec6cac5be7dc8ac1f7c804dee0171a95: kube-system/kube-proxy-65ltj/kube-proxy" id=d13a0345-c46f-47c5-af0b-e2352eeb574c name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 09:37:37 newest-cni-246956 crio[520]: time="2025-12-27T09:37:37.110585531Z" level=info msg="Starting container: af83311425132a6a300023f95645d55dec6cac5be7dc8ac1f7c804dee0171a95" id=eefb4a76-0f18-4c33-814f-1d667aa3cca2 name=/runtime.v1.RuntimeService/StartContainer
	Dec 27 09:37:37 newest-cni-246956 crio[520]: time="2025-12-27T09:37:37.113733623Z" level=info msg="Started container" PID=1051 containerID=af83311425132a6a300023f95645d55dec6cac5be7dc8ac1f7c804dee0171a95 description=kube-system/kube-proxy-65ltj/kube-proxy id=eefb4a76-0f18-4c33-814f-1d667aa3cca2 name=/runtime.v1.RuntimeService/StartContainer sandboxID=bdf4b8ed50480bd375dfe42bfe4c4c765aa57352edec78e6c970632ad7589d6d
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	90d4a673309e9       4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251   5 seconds ago       Running             kindnet-cni               1                   32524c18395e5       kindnet-lmtxw                               kube-system
	af83311425132       32652ff1bbe6b6df16a3dc621fcad4e6e185a2852bffc5847aab79e36c54bfb8   5 seconds ago       Running             kube-proxy                1                   bdf4b8ed50480       kube-proxy-65ltj                            kube-system
	2e65f48ec568a       550794e3b12ac21ec7fd940bdfb45f7c8dae3c52acd8c70e580b882de20c3dcc   8 seconds ago       Running             kube-scheduler            1                   2e72c9f7fc855       kube-scheduler-newest-cni-246956            kube-system
	f2b4d1b51b8b5       5c6acd67e9cd10eb60a246cd233db251ef62ea97e6572f897e873f0cb648f499   8 seconds ago       Running             kube-apiserver            1                   29cffb355d019       kube-apiserver-newest-cni-246956            kube-system
	5263ceb230522       0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2   8 seconds ago       Running             etcd                      1                   7580047475c45       etcd-newest-cni-246956                      kube-system
	270f3f6fdb6aa       2c9a4b058bd7e6ec479c38f9e8a1dac2f5ee5b0a3ebda6dfac968e8720229508   8 seconds ago       Running             kube-controller-manager   1                   09e80cbef7153       kube-controller-manager-newest-cni-246956   kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-246956
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-246956
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6d12708197eb5984f26340287ced0d25367967e8
	                    minikube.k8s.io/name=newest-cni-246956
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_27T09_37_08_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 27 Dec 2025 09:37:05 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-246956
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 27 Dec 2025 09:37:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 27 Dec 2025 09:37:36 +0000   Sat, 27 Dec 2025 09:37:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 27 Dec 2025 09:37:36 +0000   Sat, 27 Dec 2025 09:37:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 27 Dec 2025 09:37:36 +0000   Sat, 27 Dec 2025 09:37:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sat, 27 Dec 2025 09:37:36 +0000   Sat, 27 Dec 2025 09:37:03 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-246956
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 6a05ebbd9dbde47388fc365d694bbbfd
	  System UUID:                0282d3f4-6f31-42cc-85b2-77d015ffb093
	  Boot ID:                    38891013-55ac-4a66-aef6-0b0711ecc60c
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-246956                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         36s
	  kube-system                 kindnet-lmtxw                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      30s
	  kube-system                 kube-apiserver-newest-cni-246956             250m (3%)     0 (0%)      0 (0%)           0 (0%)         36s
	  kube-system                 kube-controller-manager-newest-cni-246956    200m (2%)     0 (0%)      0 (0%)           0 (0%)         37s
	  kube-system                 kube-proxy-65ltj                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-scheduler-newest-cni-246956             100m (1%)     0 (0%)      0 (0%)           0 (0%)         36s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  31s   node-controller  Node newest-cni-246956 event: Registered Node newest-cni-246956 in Controller
	  Normal  RegisteredNode  4s    node-controller  Node newest-cni-246956 event: Registered Node newest-cni-246956 in Controller
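The describe output above ties the earlier Pending pods together: the node is Ready=False (no CNI configuration file in /etc/cni/net.d/ yet), so it still carries the node.kubernetes.io/not-ready:NoSchedule taint, which is exactly the "untolerated taint(s)" that left coredns and storage-provisioner Unschedulable in the pod list. A short client-go sketch that surfaces the same conditions and taints, reusing the illustrative kubeconfig setup from the earlier sketch:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(config)
	node, err := cs.CoreV1().Nodes().Get(context.Background(), "newest-cni-246956", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// The Ready=False condition and the not-ready taint printed here are
	// what keep coredns and storage-provisioner Pending in the list above.
	for _, c := range node.Status.Conditions {
		fmt.Printf("%-16s %s  %s\n", c.Type, c.Status, c.Reason)
	}
	for _, t := range node.Spec.Taints {
		fmt.Printf("taint: %s=%s:%s\n", t.Key, t.Value, t.Effect)
	}
}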
	
	
	==> dmesg <==
	[Dec27 09:02] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 3a ce 76 ea 00 16 08 06
	[ +15.308804] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 5e ed 76 f4 5a 59 08 06
	[  +0.000321] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 3a ce 76 ea 00 16 08 06
	[  +1.257710] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 62 30 8d 6e b6 d3 08 06
	[Dec27 09:03] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff de f8 3b c4 47 35 08 06
	[  +0.002052] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff de 0f d7 5a 4b 84 08 06
	[ +15.582399] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff f6 ce 0c 6c 86 d2 08 06
	[  +0.000307] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 62 30 8d 6e b6 d3 08 06
	[  +9.916919] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 5e cc 0c a0 d8 19 08 06
	[  +0.000360] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 06 01 72 f8 79 43 08 06
	[ +21.695394] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 66 bb be 81 50 ce 08 06
	[  +0.000381] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff de 0f d7 5a 4b 84 08 06
	
	
	==> etcd [5263ceb230522a69b34eb816b6de3b89096d950d9862e603732cbf4a1c75836e] <==
	{"level":"info","ts":"2025-12-27T09:37:34.348501Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-27T09:37:34.348527Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-27T09:37:34.348722Z","caller":"embed/etcd.go:640","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-12-27T09:37:34.348743Z","caller":"embed/etcd.go:611","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-12-27T09:37:34.349375Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2025-12-27T09:37:34.349459Z","caller":"membership/cluster.go:433","msg":"ignore already added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2025-12-27T09:37:34.349536Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","from":"3.6","to":"3.6"}
	{"level":"info","ts":"2025-12-27T09:37:34.739638Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2025-12-27T09:37:34.739688Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-12-27T09:37:34.739782Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-12-27T09:37:34.739911Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-27T09:37:34.739939Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2025-12-27T09:37:34.741201Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-12-27T09:37:34.741300Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-27T09:37:34.741345Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2025-12-27T09:37:34.741375Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-12-27T09:37:34.742482Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:newest-cni-246956 ClientURLs:[https://192.168.76.2:2379]}","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-27T09:37:34.742534Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T09:37:34.742482Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T09:37:34.743442Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-27T09:37:34.743480Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-27T09:37:34.743924Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-27T09:37:34.745155Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-27T09:37:34.747439Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-27T09:37:34.749234Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	
	
	==> kernel <==
	 09:37:43 up  1:20,  0 user,  load average: 3.35, 3.14, 2.38
	Linux newest-cni-246956 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [90d4a673309e96dc2ba67d645eafe93dc5d29a0ecfd31ba0ae335db9ed033522] <==
	I1227 09:37:37.373599       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1227 09:37:37.373875       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1227 09:37:37.374004       1 main.go:148] setting mtu 1500 for CNI 
	I1227 09:37:37.374027       1 main.go:178] kindnetd IP family: "ipv4"
	I1227 09:37:37.374046       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-27T09:37:37Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1227 09:37:37.577204       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1227 09:37:37.577311       1 controller.go:381] "Waiting for informer caches to sync"
	I1227 09:37:37.577329       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1227 09:37:37.577482       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1227 09:37:37.877462       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1227 09:37:37.877491       1 metrics.go:72] Registering metrics
	I1227 09:37:37.877597       1 controller.go:711] "Syncing nftables rules"
	
	
	==> kube-apiserver [f2b4d1b51b8b5d05a7022c93ea1d77fcf987df7336ffa16aa507218f305194b7] <==
	I1227 09:37:35.922029       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1227 09:37:35.938671       1 shared_informer.go:377] "Caches are synced"
	I1227 09:37:35.939069       1 policy_source.go:248] refreshing policies
	I1227 09:37:35.939262       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1227 09:37:35.991048       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1227 09:37:35.991301       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1227 09:37:35.991196       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1227 09:37:35.991166       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1227 09:37:35.991263       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1227 09:37:35.991279       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1227 09:37:35.999230       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1227 09:37:36.005336       1 cache.go:39] Caches are synced for autoregister controller
	I1227 09:37:36.215497       1 controller.go:667] quota admission added evaluator for: namespaces
	I1227 09:37:36.243568       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1227 09:37:36.259137       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1227 09:37:36.265243       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1227 09:37:36.271838       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1227 09:37:36.297896       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.108.129.221"}
	I1227 09:37:36.306953       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.109.95.221"}
	I1227 09:37:36.795291       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1227 09:37:39.564557       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1227 09:37:39.564607       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1227 09:37:39.615970       1 controller.go:667] quota admission added evaluator for: endpoints
	I1227 09:37:39.666312       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1227 09:37:39.715857       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	
	
	==> kube-controller-manager [270f3f6fdb6aa9105489c1aa87a857c489190a97d3b5dadc4b9357252b50af3a] <==
	I1227 09:37:39.019086       1 shared_informer.go:377] "Caches are synced"
	I1227 09:37:39.018737       1 shared_informer.go:377] "Caches are synced"
	I1227 09:37:39.019177       1 shared_informer.go:377] "Caches are synced"
	I1227 09:37:39.019183       1 shared_informer.go:377] "Caches are synced"
	I1227 09:37:39.019201       1 shared_informer.go:377] "Caches are synced"
	I1227 09:37:39.019365       1 shared_informer.go:377] "Caches are synced"
	I1227 09:37:39.019380       1 shared_informer.go:377] "Caches are synced"
	I1227 09:37:39.019380       1 shared_informer.go:377] "Caches are synced"
	I1227 09:37:39.019403       1 shared_informer.go:377] "Caches are synced"
	I1227 09:37:39.019436       1 shared_informer.go:377] "Caches are synced"
	I1227 09:37:39.019495       1 shared_informer.go:377] "Caches are synced"
	I1227 09:37:39.019499       1 shared_informer.go:377] "Caches are synced"
	I1227 09:37:39.019533       1 shared_informer.go:377] "Caches are synced"
	I1227 09:37:39.019774       1 shared_informer.go:377] "Caches are synced"
	I1227 09:37:39.019845       1 shared_informer.go:377] "Caches are synced"
	I1227 09:37:39.019983       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I1227 09:37:39.020082       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="newest-cni-246956"
	I1227 09:37:39.020151       1 node_lifecycle_controller.go:1038] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I1227 09:37:39.020574       1 shared_informer.go:377] "Caches are synced"
	I1227 09:37:39.028033       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 09:37:39.034928       1 shared_informer.go:377] "Caches are synced"
	I1227 09:37:39.119323       1 shared_informer.go:377] "Caches are synced"
	I1227 09:37:39.119344       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1227 09:37:39.119349       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1227 09:37:39.128481       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [af83311425132a6a300023f95645d55dec6cac5be7dc8ac1f7c804dee0171a95] <==
	I1227 09:37:37.153344       1 server_linux.go:53] "Using iptables proxy"
	I1227 09:37:37.225876       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 09:37:37.326472       1 shared_informer.go:377] "Caches are synced"
	I1227 09:37:37.326508       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1227 09:37:37.326633       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1227 09:37:37.348492       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1227 09:37:37.348540       1 server_linux.go:136] "Using iptables Proxier"
	I1227 09:37:37.354044       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1227 09:37:37.354427       1 server.go:529] "Version info" version="v1.35.0"
	I1227 09:37:37.354447       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 09:37:37.356123       1 config.go:403] "Starting serviceCIDR config controller"
	I1227 09:37:37.356295       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1227 09:37:37.356328       1 config.go:106] "Starting endpoint slice config controller"
	I1227 09:37:37.356334       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1227 09:37:37.356121       1 config.go:200] "Starting service config controller"
	I1227 09:37:37.356348       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1227 09:37:37.356491       1 config.go:309] "Starting node config controller"
	I1227 09:37:37.356504       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1227 09:37:37.356511       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1227 09:37:37.456898       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1227 09:37:37.456920       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1227 09:37:37.456939       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [2e65f48ec568aa4a15024b2eea0baac4a75982b1ef72891857cb7ece1b229a93] <==
	I1227 09:37:34.509843       1 serving.go:386] Generated self-signed cert in-memory
	W1227 09:37:35.848838       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1227 09:37:35.848880       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1227 09:37:35.848895       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1227 09:37:35.848904       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1227 09:37:35.953531       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0"
	I1227 09:37:35.953637       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 09:37:35.956964       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1227 09:37:35.957862       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1227 09:37:35.957886       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 09:37:35.957917       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1227 09:37:36.058819       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 27 09:37:36 newest-cni-246956 kubelet[673]: E1227 09:37:36.028136     673 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-246956\" already exists" pod="kube-system/kube-scheduler-newest-cni-246956"
	Dec 27 09:37:36 newest-cni-246956 kubelet[673]: I1227 09:37:36.028274     673 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-246956"
	Dec 27 09:37:36 newest-cni-246956 kubelet[673]: E1227 09:37:36.033958     673 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-246956\" already exists" pod="kube-system/etcd-newest-cni-246956"
	Dec 27 09:37:36 newest-cni-246956 kubelet[673]: I1227 09:37:36.746382     673 apiserver.go:52] "Watching apiserver"
	Dec 27 09:37:36 newest-cni-246956 kubelet[673]: E1227 09:37:36.750549     673 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-newest-cni-246956" containerName="kube-controller-manager"
	Dec 27 09:37:36 newest-cni-246956 kubelet[673]: I1227 09:37:36.796315     673 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-246956"
	Dec 27 09:37:36 newest-cni-246956 kubelet[673]: I1227 09:37:36.796414     673 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-246956"
	Dec 27 09:37:36 newest-cni-246956 kubelet[673]: E1227 09:37:36.796550     673 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-246956" containerName="kube-apiserver"
	Dec 27 09:37:36 newest-cni-246956 kubelet[673]: E1227 09:37:36.801879     673 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-246956\" already exists" pod="kube-system/kube-scheduler-newest-cni-246956"
	Dec 27 09:37:36 newest-cni-246956 kubelet[673]: E1227 09:37:36.802090     673 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-246956" containerName="kube-scheduler"
	Dec 27 09:37:36 newest-cni-246956 kubelet[673]: E1227 09:37:36.802520     673 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-246956\" already exists" pod="kube-system/etcd-newest-cni-246956"
	Dec 27 09:37:36 newest-cni-246956 kubelet[673]: E1227 09:37:36.802604     673 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-246956" containerName="etcd"
	Dec 27 09:37:36 newest-cni-246956 kubelet[673]: I1227 09:37:36.850328     673 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Dec 27 09:37:36 newest-cni-246956 kubelet[673]: I1227 09:37:36.940640     673 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a1e5773a-e15f-405b-bca5-62a52d6e83a2-lib-modules\") pod \"kube-proxy-65ltj\" (UID: \"a1e5773a-e15f-405b-bca5-62a52d6e83a2\") " pod="kube-system/kube-proxy-65ltj"
	Dec 27 09:37:36 newest-cni-246956 kubelet[673]: I1227 09:37:36.940708     673 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e2185b04-5cba-4c54-86e0-9c2515f95074-xtables-lock\") pod \"kindnet-lmtxw\" (UID: \"e2185b04-5cba-4c54-86e0-9c2515f95074\") " pod="kube-system/kindnet-lmtxw"
	Dec 27 09:37:36 newest-cni-246956 kubelet[673]: I1227 09:37:36.940745     673 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a1e5773a-e15f-405b-bca5-62a52d6e83a2-xtables-lock\") pod \"kube-proxy-65ltj\" (UID: \"a1e5773a-e15f-405b-bca5-62a52d6e83a2\") " pod="kube-system/kube-proxy-65ltj"
	Dec 27 09:37:36 newest-cni-246956 kubelet[673]: I1227 09:37:36.940764     673 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e2185b04-5cba-4c54-86e0-9c2515f95074-lib-modules\") pod \"kindnet-lmtxw\" (UID: \"e2185b04-5cba-4c54-86e0-9c2515f95074\") " pod="kube-system/kindnet-lmtxw"
	Dec 27 09:37:36 newest-cni-246956 kubelet[673]: I1227 09:37:36.940827     673 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/e2185b04-5cba-4c54-86e0-9c2515f95074-cni-cfg\") pod \"kindnet-lmtxw\" (UID: \"e2185b04-5cba-4c54-86e0-9c2515f95074\") " pod="kube-system/kindnet-lmtxw"
	Dec 27 09:37:37 newest-cni-246956 kubelet[673]: E1227 09:37:37.345839     673 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-newest-cni-246956" containerName="kube-controller-manager"
	Dec 27 09:37:37 newest-cni-246956 kubelet[673]: E1227 09:37:37.802817     673 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-246956" containerName="etcd"
	Dec 27 09:37:37 newest-cni-246956 kubelet[673]: E1227 09:37:37.802998     673 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-246956" containerName="kube-scheduler"
	Dec 27 09:37:38 newest-cni-246956 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 27 09:37:38 newest-cni-246956 kubelet[673]: I1227 09:37:38.527957     673 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Dec 27 09:37:38 newest-cni-246956 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 27 09:37:38 newest-cni-246956 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-246956 -n newest-cni-246956
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-246956 -n newest-cni-246956: exit status 2 (331.655981ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context newest-cni-246956 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: coredns-7d764666f9-kqzph storage-provisioner dashboard-metrics-scraper-867fb5f87b-9t9n8 kubernetes-dashboard-b84665fb8-9j86m
helpers_test.go:283: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context newest-cni-246956 describe pod coredns-7d764666f9-kqzph storage-provisioner dashboard-metrics-scraper-867fb5f87b-9t9n8 kubernetes-dashboard-b84665fb8-9j86m
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context newest-cni-246956 describe pod coredns-7d764666f9-kqzph storage-provisioner dashboard-metrics-scraper-867fb5f87b-9t9n8 kubernetes-dashboard-b84665fb8-9j86m: exit status 1 (72.607191ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-7d764666f9-kqzph" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-867fb5f87b-9t9n8" not found
	Error from server (NotFound): pods "kubernetes-dashboard-b84665fb8-9j86m" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context newest-cni-246956 describe pod coredns-7d764666f9-kqzph storage-provisioner dashboard-metrics-scraper-867fb5f87b-9t9n8 kubernetes-dashboard-b84665fb8-9j86m: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (5.80s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Pause (5.85s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-963457 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p no-preload-963457 --alsologtostderr -v=1: exit status 80 (1.785691511s)

                                                
                                                
-- stdout --
	* Pausing node no-preload-963457 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1227 09:37:52.272784  649115 out.go:360] Setting OutFile to fd 1 ...
	I1227 09:37:52.273103  649115 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:37:52.273118  649115 out.go:374] Setting ErrFile to fd 2...
	I1227 09:37:52.273123  649115 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:37:52.273338  649115 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22343-373581/.minikube/bin
	I1227 09:37:52.273654  649115 out.go:368] Setting JSON to false
	I1227 09:37:52.273678  649115 mustload.go:66] Loading cluster: no-preload-963457
	I1227 09:37:52.274166  649115 config.go:182] Loaded profile config "no-preload-963457": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:37:52.274735  649115 cli_runner.go:164] Run: docker container inspect no-preload-963457 --format={{.State.Status}}
	I1227 09:37:52.295410  649115 host.go:66] Checking if "no-preload-963457" exists ...
	I1227 09:37:52.295677  649115 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 09:37:52.357701  649115 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:81 OomKillDisable:false NGoroutines:87 SystemTime:2025-12-27 09:37:52.347105748 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1227 09:37:52.358608  649115 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22158/minikube-v1.37.0-1766719468-22158-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1766719468-22158/minikube-v1.37.0-1766719468-22158-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1766719468-22158-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) preload-source:auto profile:no-preload-963457 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) rosetta:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1227 09:37:52.360376  649115 out.go:179] * Pausing node no-preload-963457 ... 
	I1227 09:37:52.361305  649115 host.go:66] Checking if "no-preload-963457" exists ...
	I1227 09:37:52.361576  649115 ssh_runner.go:195] Run: systemctl --version
	I1227 09:37:52.361629  649115 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-963457
	I1227 09:37:52.381396  649115 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/no-preload-963457/id_rsa Username:docker}
	I1227 09:37:52.475042  649115 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 09:37:52.489652  649115 pause.go:52] kubelet running: true
	I1227 09:37:52.489729  649115 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1227 09:37:52.659670  649115 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1227 09:37:52.659809  649115 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1227 09:37:52.733039  649115 cri.go:96] found id: "65af506e89098a5c5b4ffc8cff2878629f11900d58c4ff5309b81a9befadadd5"
	I1227 09:37:52.733065  649115 cri.go:96] found id: "14a1ce99783ca90e3f5c758ff748c1c1f76f41c8636b97b01bcf78d741646fd6"
	I1227 09:37:52.733071  649115 cri.go:96] found id: "277a270b3acbca9938619dd72bc982e6116059d92f6771afd9e2f497d7dd77b4"
	I1227 09:37:52.733076  649115 cri.go:96] found id: "eb5602ad87854ef23de99c48a34f5a0d1c623cb078e609a2a4b567f1da38de9e"
	I1227 09:37:52.733080  649115 cri.go:96] found id: "90f6040bd590bec7f0928127c1409b1a37c5c1c010abbf933230ee623c4fceca"
	I1227 09:37:52.733085  649115 cri.go:96] found id: "3e2d56e4ec07d845c73c075df87007dd294313f8f25d93e2e062eae21343461c"
	I1227 09:37:52.733089  649115 cri.go:96] found id: "0edde0ef003566fce4556ceb0a3d7cbc04d2cd6685f5afd803595f3ababb1338"
	I1227 09:37:52.733093  649115 cri.go:96] found id: "03b54f84cfa7e0b506b3122cd323cd0db22c3c4310cfedd9769eeb770ec9a426"
	I1227 09:37:52.733098  649115 cri.go:96] found id: "716a7952d1fa9945a526436df75297cbf883fb889ba62f53b3ae1e94790bfeaa"
	I1227 09:37:52.733119  649115 cri.go:96] found id: "32f303997e3c2793c1602c91acb4c248bc7aa4fb7fcd92f892f04f2ea3647d50"
	I1227 09:37:52.733125  649115 cri.go:96] found id: "86f0772760e7d28ea35556b6cafec992e2b1fa6a83846afaed7db6f327a4aed4"
	I1227 09:37:52.733134  649115 cri.go:96] found id: ""
	I1227 09:37:52.733176  649115 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 09:37:52.747373  649115 retry.go:84] will retry after 200ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T09:37:52Z" level=error msg="open /run/runc: no such file or directory"
	I1227 09:37:52.947806  649115 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 09:37:52.960536  649115 pause.go:52] kubelet running: false
	I1227 09:37:52.960620  649115 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1227 09:37:53.115364  649115 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1227 09:37:53.115469  649115 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1227 09:37:53.192159  649115 cri.go:96] found id: "65af506e89098a5c5b4ffc8cff2878629f11900d58c4ff5309b81a9befadadd5"
	I1227 09:37:53.192183  649115 cri.go:96] found id: "14a1ce99783ca90e3f5c758ff748c1c1f76f41c8636b97b01bcf78d741646fd6"
	I1227 09:37:53.192190  649115 cri.go:96] found id: "277a270b3acbca9938619dd72bc982e6116059d92f6771afd9e2f497d7dd77b4"
	I1227 09:37:53.192196  649115 cri.go:96] found id: "eb5602ad87854ef23de99c48a34f5a0d1c623cb078e609a2a4b567f1da38de9e"
	I1227 09:37:53.192201  649115 cri.go:96] found id: "90f6040bd590bec7f0928127c1409b1a37c5c1c010abbf933230ee623c4fceca"
	I1227 09:37:53.192207  649115 cri.go:96] found id: "3e2d56e4ec07d845c73c075df87007dd294313f8f25d93e2e062eae21343461c"
	I1227 09:37:53.192213  649115 cri.go:96] found id: "0edde0ef003566fce4556ceb0a3d7cbc04d2cd6685f5afd803595f3ababb1338"
	I1227 09:37:53.192217  649115 cri.go:96] found id: "03b54f84cfa7e0b506b3122cd323cd0db22c3c4310cfedd9769eeb770ec9a426"
	I1227 09:37:53.192230  649115 cri.go:96] found id: "716a7952d1fa9945a526436df75297cbf883fb889ba62f53b3ae1e94790bfeaa"
	I1227 09:37:53.192260  649115 cri.go:96] found id: "32f303997e3c2793c1602c91acb4c248bc7aa4fb7fcd92f892f04f2ea3647d50"
	I1227 09:37:53.192272  649115 cri.go:96] found id: "86f0772760e7d28ea35556b6cafec992e2b1fa6a83846afaed7db6f327a4aed4"
	I1227 09:37:53.192278  649115 cri.go:96] found id: ""
	I1227 09:37:53.192333  649115 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 09:37:53.714407  649115 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 09:37:53.727498  649115 pause.go:52] kubelet running: false
	I1227 09:37:53.727564  649115 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1227 09:37:53.902197  649115 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1227 09:37:53.902282  649115 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1227 09:37:53.973406  649115 cri.go:96] found id: "65af506e89098a5c5b4ffc8cff2878629f11900d58c4ff5309b81a9befadadd5"
	I1227 09:37:53.973433  649115 cri.go:96] found id: "14a1ce99783ca90e3f5c758ff748c1c1f76f41c8636b97b01bcf78d741646fd6"
	I1227 09:37:53.973438  649115 cri.go:96] found id: "277a270b3acbca9938619dd72bc982e6116059d92f6771afd9e2f497d7dd77b4"
	I1227 09:37:53.973443  649115 cri.go:96] found id: "eb5602ad87854ef23de99c48a34f5a0d1c623cb078e609a2a4b567f1da38de9e"
	I1227 09:37:53.973448  649115 cri.go:96] found id: "90f6040bd590bec7f0928127c1409b1a37c5c1c010abbf933230ee623c4fceca"
	I1227 09:37:53.973452  649115 cri.go:96] found id: "3e2d56e4ec07d845c73c075df87007dd294313f8f25d93e2e062eae21343461c"
	I1227 09:37:53.973457  649115 cri.go:96] found id: "0edde0ef003566fce4556ceb0a3d7cbc04d2cd6685f5afd803595f3ababb1338"
	I1227 09:37:53.973461  649115 cri.go:96] found id: "03b54f84cfa7e0b506b3122cd323cd0db22c3c4310cfedd9769eeb770ec9a426"
	I1227 09:37:53.973465  649115 cri.go:96] found id: "716a7952d1fa9945a526436df75297cbf883fb889ba62f53b3ae1e94790bfeaa"
	I1227 09:37:53.973472  649115 cri.go:96] found id: "32f303997e3c2793c1602c91acb4c248bc7aa4fb7fcd92f892f04f2ea3647d50"
	I1227 09:37:53.973477  649115 cri.go:96] found id: "86f0772760e7d28ea35556b6cafec992e2b1fa6a83846afaed7db6f327a4aed4"
	I1227 09:37:53.973485  649115 cri.go:96] found id: ""
	I1227 09:37:53.973530  649115 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 09:37:53.991813  649115 out.go:203] 
	W1227 09:37:53.992935  649115 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T09:37:53Z" level=error msg="open /run/runc: no such file or directory"
	
	W1227 09:37:53.992957  649115 out.go:285] * 
	W1227 09:37:53.995268  649115 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1227 09:37:53.996366  649115 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p no-preload-963457 --alsologtostderr -v=1 failed: exit status 80
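The exit status 80 above comes down to a single failing probe: sudo runc list -f json exits 1 on every retry because the default runc state directory /run/runc is missing inside the node. The docker inspect output below shows /run mounted as a tmpfs, so the directory starts out empty after the container restart at 09:36:49. A hand-run diagnostic sketch (not part of the test suite; the profile name is taken from this run):

	# Rerun the failing probe verbatim inside the node.
	out/minikube-linux-amd64 -p no-preload-963457 ssh -- sudo runc list -f json
	# Check whether the default runc state directory exists; if it is absent
	# while crictl still lists containers, the CRI runtime is presumably
	# keeping its runc state under a different root.
	out/minikube-linux-amd64 -p no-preload-963457 ssh -- ls -ld /run/runc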
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-963457
helpers_test.go:244: (dbg) docker inspect no-preload-963457:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "0e530c327725200854097c32fa88d971e9134b10f00f38040c69cbfdb8db2177",
	        "Created": "2025-12-27T09:35:31.385556523Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 629770,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-27T09:36:49.795104128Z",
	            "FinishedAt": "2025-12-27T09:36:48.916499117Z"
	        },
	        "Image": "sha256:c444b5e11df9d6bc496256b0e11f4e11deb33a885211ded2c3a4667df55bcb6b",
	        "ResolvConfPath": "/var/lib/docker/containers/0e530c327725200854097c32fa88d971e9134b10f00f38040c69cbfdb8db2177/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0e530c327725200854097c32fa88d971e9134b10f00f38040c69cbfdb8db2177/hostname",
	        "HostsPath": "/var/lib/docker/containers/0e530c327725200854097c32fa88d971e9134b10f00f38040c69cbfdb8db2177/hosts",
	        "LogPath": "/var/lib/docker/containers/0e530c327725200854097c32fa88d971e9134b10f00f38040c69cbfdb8db2177/0e530c327725200854097c32fa88d971e9134b10f00f38040c69cbfdb8db2177-json.log",
	        "Name": "/no-preload-963457",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-963457:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-963457",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "0e530c327725200854097c32fa88d971e9134b10f00f38040c69cbfdb8db2177",
	                "LowerDir": "/var/lib/docker/overlay2/a948dd6c51157da59690737cde9043eec469d473ae47400a0e1b93038c0c9f1d-init/diff:/var/lib/docker/overlay2/8c7e7d4b0a6e752d3b033998593f18412fe880cdae3fca36ce4655d5d5dd6a34/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a948dd6c51157da59690737cde9043eec469d473ae47400a0e1b93038c0c9f1d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a948dd6c51157da59690737cde9043eec469d473ae47400a0e1b93038c0c9f1d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a948dd6c51157da59690737cde9043eec469d473ae47400a0e1b93038c0c9f1d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "no-preload-963457",
	                "Source": "/var/lib/docker/volumes/no-preload-963457/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-963457",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-963457",
	                "name.minikube.sigs.k8s.io": "no-preload-963457",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "bf235d27c31ce3c0429809fa70bd10057a327cd3027101e0c55ce1b1aa16f7e8",
	            "SandboxKey": "/var/run/docker/netns/bf235d27c31c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33458"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33459"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33462"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33460"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33461"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-963457": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "e27fce9ec482a6f231f0cd34fc8f67937ff2dfde3915e36e90c5e0b4fd43cbe7",
	                    "EndpointID": "39b0495ba09f2a4dab9cd7a02bcad3e1adea35a5862dd5befdaef68af2353f1b",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "a6:fc:de:fe:64:7a",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-963457",
	                        "0e530c327725"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
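When only a field or two from the inspect dump above is needed, docker's Go-template formatting pulls it directly; a small sketch against the same container (the second template is the one the harness itself uses to discover the SSH port):

	# Container state and init PID, matching .State in the JSON above.
	docker inspect -f '{{.State.Status}} pid={{.State.Pid}}' no-preload-963457
	# Host port mapped to the node's SSH port (22/tcp).
	docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' no-preload-963457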
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-963457 -n no-preload-963457
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-963457 -n no-preload-963457: exit status 2 (345.320859ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
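Exit status 2 with Host reporting Running is consistent with the failed pause above: the pause attempt disabled the kubelet (pause.go:52 logged kubelet running: false) before bailing out, so the host stays up while a component is stopped. The per-component view makes that visible; a sketch with the same binary and profile:

	# Component-by-component status; kubelet can be expected to show Stopped here.
	out/minikube-linux-amd64 status -p no-preload-963457
	# The same information in machine-readable form.
	out/minikube-linux-amd64 status -p no-preload-963457 --output=json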
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-963457 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p no-preload-963457 logs -n 25: (1.181081721s)
helpers_test.go:261: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p old-k8s-version-094398                                                                                                                                                                                                                     │ old-k8s-version-094398       │ jenkins │ v1.37.0 │ 27 Dec 25 09:36 UTC │ 27 Dec 25 09:36 UTC │
	│ addons  │ enable dashboard -p no-preload-963457 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-963457            │ jenkins │ v1.37.0 │ 27 Dec 25 09:36 UTC │ 27 Dec 25 09:36 UTC │
	│ start   │ -p no-preload-963457 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                       │ no-preload-963457            │ jenkins │ v1.37.0 │ 27 Dec 25 09:36 UTC │ 27 Dec 25 09:37 UTC │
	│ delete  │ -p old-k8s-version-094398                                                                                                                                                                                                                     │ old-k8s-version-094398       │ jenkins │ v1.37.0 │ 27 Dec 25 09:36 UTC │ 27 Dec 25 09:36 UTC │
	│ start   │ -p newest-cni-246956 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ newest-cni-246956            │ jenkins │ v1.37.0 │ 27 Dec 25 09:36 UTC │ 27 Dec 25 09:37 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-497722 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-497722 │ jenkins │ v1.37.0 │ 27 Dec 25 09:36 UTC │ 27 Dec 25 09:36 UTC │
	│ start   │ -p default-k8s-diff-port-497722 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-497722 │ jenkins │ v1.37.0 │ 27 Dec 25 09:36 UTC │ 27 Dec 25 09:37 UTC │
	│ image   │ embed-certs-912564 image list --format=json                                                                                                                                                                                                   │ embed-certs-912564           │ jenkins │ v1.37.0 │ 27 Dec 25 09:37 UTC │ 27 Dec 25 09:37 UTC │
	│ pause   │ -p embed-certs-912564 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-912564           │ jenkins │ v1.37.0 │ 27 Dec 25 09:37 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-246956 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-246956            │ jenkins │ v1.37.0 │ 27 Dec 25 09:37 UTC │                     │
	│ stop    │ -p newest-cni-246956 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-246956            │ jenkins │ v1.37.0 │ 27 Dec 25 09:37 UTC │ 27 Dec 25 09:37 UTC │
	│ delete  │ -p embed-certs-912564                                                                                                                                                                                                                         │ embed-certs-912564           │ jenkins │ v1.37.0 │ 27 Dec 25 09:37 UTC │ 27 Dec 25 09:37 UTC │
	│ delete  │ -p embed-certs-912564                                                                                                                                                                                                                         │ embed-certs-912564           │ jenkins │ v1.37.0 │ 27 Dec 25 09:37 UTC │ 27 Dec 25 09:37 UTC │
	│ start   │ -p auto-157923 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-157923                  │ jenkins │ v1.37.0 │ 27 Dec 25 09:37 UTC │                     │
	│ addons  │ enable dashboard -p newest-cni-246956 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-246956            │ jenkins │ v1.37.0 │ 27 Dec 25 09:37 UTC │ 27 Dec 25 09:37 UTC │
	│ start   │ -p newest-cni-246956 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ newest-cni-246956            │ jenkins │ v1.37.0 │ 27 Dec 25 09:37 UTC │ 27 Dec 25 09:37 UTC │
	│ image   │ newest-cni-246956 image list --format=json                                                                                                                                                                                                    │ newest-cni-246956            │ jenkins │ v1.37.0 │ 27 Dec 25 09:37 UTC │ 27 Dec 25 09:37 UTC │
	│ pause   │ -p newest-cni-246956 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-246956            │ jenkins │ v1.37.0 │ 27 Dec 25 09:37 UTC │                     │
	│ delete  │ -p newest-cni-246956                                                                                                                                                                                                                          │ newest-cni-246956            │ jenkins │ v1.37.0 │ 27 Dec 25 09:37 UTC │ 27 Dec 25 09:37 UTC │
	│ delete  │ -p newest-cni-246956                                                                                                                                                                                                                          │ newest-cni-246956            │ jenkins │ v1.37.0 │ 27 Dec 25 09:37 UTC │ 27 Dec 25 09:37 UTC │
	│ start   │ -p kindnet-157923 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio                                                                                                      │ kindnet-157923               │ jenkins │ v1.37.0 │ 27 Dec 25 09:37 UTC │                     │
	│ image   │ no-preload-963457 image list --format=json                                                                                                                                                                                                    │ no-preload-963457            │ jenkins │ v1.37.0 │ 27 Dec 25 09:37 UTC │ 27 Dec 25 09:37 UTC │
	│ pause   │ -p no-preload-963457 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-963457            │ jenkins │ v1.37.0 │ 27 Dec 25 09:37 UTC │                     │
	│ image   │ default-k8s-diff-port-497722 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-497722 │ jenkins │ v1.37.0 │ 27 Dec 25 09:37 UTC │ 27 Dec 25 09:37 UTC │
	│ pause   │ -p default-k8s-diff-port-497722 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-497722 │ jenkins │ v1.37.0 │ 27 Dec 25 09:37 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/27 09:37:46
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1227 09:37:46.864671  647990 out.go:360] Setting OutFile to fd 1 ...
	I1227 09:37:46.864827  647990 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:37:46.864841  647990 out.go:374] Setting ErrFile to fd 2...
	I1227 09:37:46.864848  647990 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:37:46.865083  647990 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22343-373581/.minikube/bin
	I1227 09:37:46.865565  647990 out.go:368] Setting JSON to false
	I1227 09:37:46.866843  647990 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4811,"bootTime":1766823456,"procs":347,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1227 09:37:46.866917  647990 start.go:143] virtualization: kvm guest
	I1227 09:37:46.868763  647990 out.go:179] * [kindnet-157923] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1227 09:37:46.869915  647990 out.go:179]   - MINIKUBE_LOCATION=22343
	I1227 09:37:46.869935  647990 notify.go:221] Checking for updates...
	I1227 09:37:46.872147  647990 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 09:37:46.873206  647990 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22343-373581/kubeconfig
	I1227 09:37:46.874198  647990 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22343-373581/.minikube
	I1227 09:37:46.875076  647990 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1227 09:37:46.876025  647990 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1227 09:37:46.877293  647990 config.go:182] Loaded profile config "auto-157923": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:37:46.877377  647990 config.go:182] Loaded profile config "default-k8s-diff-port-497722": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:37:46.877469  647990 config.go:182] Loaded profile config "no-preload-963457": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:37:46.877582  647990 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 09:37:46.900930  647990 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1227 09:37:46.901023  647990 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 09:37:46.956322  647990 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-27 09:37:46.946538862 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1227 09:37:46.956429  647990 docker.go:319] overlay module found
	I1227 09:37:46.957771  647990 out.go:179] * Using the docker driver based on user configuration
	I1227 09:37:46.958739  647990 start.go:309] selected driver: docker
	I1227 09:37:46.958754  647990 start.go:928] validating driver "docker" against <nil>
	I1227 09:37:46.958765  647990 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 09:37:46.959277  647990 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 09:37:47.014097  647990 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-27 09:37:47.004076494 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1227 09:37:47.014262  647990 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1227 09:37:47.014493  647990 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 09:37:47.015832  647990 out.go:179] * Using Docker driver with root privileges
	I1227 09:37:47.016778  647990 cni.go:84] Creating CNI manager for "kindnet"
	I1227 09:37:47.016807  647990 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I1227 09:37:47.016889  647990 start.go:353] cluster config:
	{Name:kindnet-157923 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:kindnet-157923 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 09:37:47.018032  647990 out.go:179] * Starting "kindnet-157923" primary control-plane node in "kindnet-157923" cluster
	I1227 09:37:47.018884  647990 cache.go:134] Beginning downloading kic base image for docker with crio
	I1227 09:37:47.019947  647990 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
	I1227 09:37:47.021519  647990 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 09:37:47.021557  647990 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22343-373581/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4
	I1227 09:37:47.021566  647990 cache.go:65] Caching tarball of preloaded images
	I1227 09:37:47.021665  647990 preload.go:251] Found /home/jenkins/minikube-integration/22343-373581/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1227 09:37:47.021690  647990 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I1227 09:37:47.021942  647990 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1227 09:37:47.022225  647990 profile.go:143] Saving config to /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/kindnet-157923/config.json ...
	I1227 09:37:47.022262  647990 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/kindnet-157923/config.json: {Name:mkc9489786022a3c521e082ba47d43b09ee5c209 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:37:47.043904  647990 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon, skipping pull
	I1227 09:37:47.043922  647990 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in daemon, skipping load
	I1227 09:37:47.043937  647990 cache.go:243] Successfully downloaded all kic artifacts
	I1227 09:37:47.043970  647990 start.go:360] acquireMachinesLock for kindnet-157923: {Name:mk5cf38a4c59f5d9a1319baf127d324f7051b88d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 09:37:47.044057  647990 start.go:364] duration metric: took 73.03µs to acquireMachinesLock for "kindnet-157923"
	I1227 09:37:47.044079  647990 start.go:93] Provisioning new machine with config: &{Name:kindnet-157923 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:kindnet-157923 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1227 09:37:47.044147  647990 start.go:125] createHost starting for "" (driver="docker")
	I1227 09:37:44.467035  640477 addons.go:530] duration metric: took 525.080113ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1227 09:37:44.738293  640477 kapi.go:214] "coredns" deployment in "kube-system" namespace and "auto-157923" context rescaled to 1 replicas
	W1227 09:37:46.238781  640477 node_ready.go:57] node "auto-157923" has "Ready":"False" status (will retry)
	W1227 09:37:48.238941  640477 node_ready.go:57] node "auto-157923" has "Ready":"False" status (will retry)
	I1227 09:37:47.045517  647990 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1227 09:37:47.045710  647990 start.go:159] libmachine.API.Create for "kindnet-157923" (driver="docker")
	I1227 09:37:47.045736  647990 client.go:173] LocalClient.Create starting
	I1227 09:37:47.045836  647990 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca.pem
	I1227 09:37:47.045874  647990 main.go:144] libmachine: Decoding PEM data...
	I1227 09:37:47.045891  647990 main.go:144] libmachine: Parsing certificate...
	I1227 09:37:47.045952  647990 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22343-373581/.minikube/certs/cert.pem
	I1227 09:37:47.045975  647990 main.go:144] libmachine: Decoding PEM data...
	I1227 09:37:47.045984  647990 main.go:144] libmachine: Parsing certificate...
	I1227 09:37:47.046348  647990 cli_runner.go:164] Run: docker network inspect kindnet-157923 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1227 09:37:47.063570  647990 cli_runner.go:211] docker network inspect kindnet-157923 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1227 09:37:47.063648  647990 network_create.go:284] running [docker network inspect kindnet-157923] to gather additional debugging logs...
	I1227 09:37:47.063669  647990 cli_runner.go:164] Run: docker network inspect kindnet-157923
	W1227 09:37:47.079738  647990 cli_runner.go:211] docker network inspect kindnet-157923 returned with exit code 1
	I1227 09:37:47.079765  647990 network_create.go:287] error running [docker network inspect kindnet-157923]: docker network inspect kindnet-157923: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network kindnet-157923 not found
	I1227 09:37:47.079776  647990 network_create.go:289] output of [docker network inspect kindnet-157923]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network kindnet-157923 not found
	
	** /stderr **
	I1227 09:37:47.079880  647990 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 09:37:47.096505  647990 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-8c57ecff6d5e IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:1e:54:d8:ad:bd:73} reservation:<nil>}
	I1227 09:37:47.097278  647990 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-21a699476be6 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:fe:e8:d9:95:e6:36} reservation:<nil>}
	I1227 09:37:47.097729  647990 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-8e97c5356905 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:f2:d9:6b:42:f5:e3} reservation:<nil>}
	I1227 09:37:47.098498  647990 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001d83dd0}
	I1227 09:37:47.098520  647990 network_create.go:124] attempt to create docker network kindnet-157923 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1227 09:37:47.098594  647990 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kindnet-157923 kindnet-157923
	I1227 09:37:47.148980  647990 network_create.go:108] docker network kindnet-157923 192.168.76.0/24 created
	I1227 09:37:47.149015  647990 kic.go:121] calculated static IP "192.168.76.2" for the "kindnet-157923" container
	I1227 09:37:47.149100  647990 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1227 09:37:47.166970  647990 cli_runner.go:164] Run: docker volume create kindnet-157923 --label name.minikube.sigs.k8s.io=kindnet-157923 --label created_by.minikube.sigs.k8s.io=true
	I1227 09:37:47.185066  647990 oci.go:103] Successfully created a docker volume kindnet-157923
	I1227 09:37:47.185135  647990 cli_runner.go:164] Run: docker run --rm --name kindnet-157923-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-157923 --entrypoint /usr/bin/test -v kindnet-157923:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -d /var/lib
	I1227 09:37:47.721086  647990 oci.go:107] Successfully prepared a docker volume kindnet-157923
	I1227 09:37:47.721191  647990 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 09:37:47.721212  647990 kic.go:194] Starting extracting preloaded images to volume ...
	I1227 09:37:47.721327  647990 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22343-373581/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-157923:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -I lz4 -xf /preloaded.tar -C /extractDir
	I1227 09:37:51.492743  647990 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22343-373581/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-157923:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -I lz4 -xf /preloaded.tar -C /extractDir: (3.771349551s)
	I1227 09:37:51.492802  647990 kic.go:203] duration metric: took 3.771572095s to extract preloaded images to volume ...
	W1227 09:37:51.492907  647990 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1227 09:37:51.492986  647990 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1227 09:37:51.493040  647990 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1227 09:37:51.548737  647990 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kindnet-157923 --name kindnet-157923 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-157923 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kindnet-157923 --network kindnet-157923 --ip 192.168.76.2 --volume kindnet-157923:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a
	I1227 09:37:51.804564  647990 cli_runner.go:164] Run: docker container inspect kindnet-157923 --format={{.State.Running}}
	I1227 09:37:51.823570  647990 cli_runner.go:164] Run: docker container inspect kindnet-157923 --format={{.State.Status}}
	I1227 09:37:51.841753  647990 cli_runner.go:164] Run: docker exec kindnet-157923 stat /var/lib/dpkg/alternatives/iptables
	W1227 09:37:50.284486  640477 node_ready.go:57] node "auto-157923" has "Ready":"False" status (will retry)
	W1227 09:37:52.739006  640477 node_ready.go:57] node "auto-157923" has "Ready":"False" status (will retry)
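	The network_create.go / network.go lines above walk candidate private /24 blocks (192.168.49.0, 192.168.58.0, 192.168.67.0, ...), skip each one already claimed by a host bridge, and take the first free block (here 192.168.76.0/24) for the new docker network. A minimal Go sketch of that scan, assuming the stepping of 9 in the third octet implied by the log; the taken set is hardcoded here as a stand-in for what minikube derives from host interfaces:

	package main

	import "fmt"

	func main() {
		// Subnets already claimed by docker bridges, per the log above.
		taken := map[string]bool{
			"192.168.49.0/24": true, // br-8c57ecff6d5e
			"192.168.58.0/24": true, // br-21a699476be6
			"192.168.67.0/24": true, // br-8e97c5356905
		}
		// Try candidate /24 blocks in the same order the log shows.
		for octet := 49; octet <= 247; octet += 9 {
			cidr := fmt.Sprintf("192.168.%d.0/24", octet)
			if taken[cidr] {
				fmt.Println("skipping subnet", cidr, "that is taken")
				continue
			}
			fmt.Println("using free private subnet", cidr)
			return
		}
		fmt.Println("no free /24 found in 192.168.0.0/16")
	}

	Run as written, this prints the same skip/use sequence the log records and lands on 192.168.76.0/24, which is then passed to "docker network create --subnet=192.168.76.0/24".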
	
	
	==> CRI-O <==
	Dec 27 09:37:18 no-preload-963457 crio[561]: time="2025-12-27T09:37:18.883350504Z" level=info msg="Started container" PID=1731 containerID=376c6336427a2b0af7525abe2d6342dabd3af6c951a46105f82ccfcb95200855 description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-qj7z2/dashboard-metrics-scraper id=3b442bef-5843-46d9-aa14-36b3ed51e206 name=/runtime.v1.RuntimeService/StartContainer sandboxID=5531d82b938e4bbdce99428fb11b73138b7b7ef0e0991bc8ba4569f8d44ba308
	Dec 27 09:37:19 no-preload-963457 crio[561]: time="2025-12-27T09:37:19.945300233Z" level=info msg="Removing container: 6abd451748c9f46e280949ff15c32d7be9f0710b8cb9dacb9c97ed3250d82605" id=288d8cac-92eb-4022-8636-1e4519f70448 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 27 09:37:19 no-preload-963457 crio[561]: time="2025-12-27T09:37:19.956017329Z" level=info msg="Removed container 6abd451748c9f46e280949ff15c32d7be9f0710b8cb9dacb9c97ed3250d82605: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-qj7z2/dashboard-metrics-scraper" id=288d8cac-92eb-4022-8636-1e4519f70448 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 27 09:37:30 no-preload-963457 crio[561]: time="2025-12-27T09:37:30.97348749Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=1b6422ad-8a72-4f2e-a54b-b3edd41fb1a7 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 09:37:30 no-preload-963457 crio[561]: time="2025-12-27T09:37:30.974423517Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=b27886af-970f-4c94-9393-0bc712c44bce name=/runtime.v1.ImageService/ImageStatus
	Dec 27 09:37:30 no-preload-963457 crio[561]: time="2025-12-27T09:37:30.975479376Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=bf58aa41-fd05-4d3b-b56e-423a207f890c name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 09:37:30 no-preload-963457 crio[561]: time="2025-12-27T09:37:30.975622171Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 09:37:30 no-preload-963457 crio[561]: time="2025-12-27T09:37:30.980688395Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 09:37:30 no-preload-963457 crio[561]: time="2025-12-27T09:37:30.980941496Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/5790fade8fbcd4a49b64d2a18c566f337f34830cafd6d6cfa9012335083d4d6b/merged/etc/passwd: no such file or directory"
	Dec 27 09:37:30 no-preload-963457 crio[561]: time="2025-12-27T09:37:30.980978355Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/5790fade8fbcd4a49b64d2a18c566f337f34830cafd6d6cfa9012335083d4d6b/merged/etc/group: no such file or directory"
	Dec 27 09:37:30 no-preload-963457 crio[561]: time="2025-12-27T09:37:30.981836292Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 09:37:31 no-preload-963457 crio[561]: time="2025-12-27T09:37:31.006712205Z" level=info msg="Created container 65af506e89098a5c5b4ffc8cff2878629f11900d58c4ff5309b81a9befadadd5: kube-system/storage-provisioner/storage-provisioner" id=bf58aa41-fd05-4d3b-b56e-423a207f890c name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 09:37:31 no-preload-963457 crio[561]: time="2025-12-27T09:37:31.007389521Z" level=info msg="Starting container: 65af506e89098a5c5b4ffc8cff2878629f11900d58c4ff5309b81a9befadadd5" id=b8e1370e-a61b-4adf-97b4-da9c966d944f name=/runtime.v1.RuntimeService/StartContainer
	Dec 27 09:37:31 no-preload-963457 crio[561]: time="2025-12-27T09:37:31.00958705Z" level=info msg="Started container" PID=1749 containerID=65af506e89098a5c5b4ffc8cff2878629f11900d58c4ff5309b81a9befadadd5 description=kube-system/storage-provisioner/storage-provisioner id=b8e1370e-a61b-4adf-97b4-da9c966d944f name=/runtime.v1.RuntimeService/StartContainer sandboxID=843b9ed615caba93a86db4ecf531f2b6c3207b997d5a776c5c8d88f51e0a2284
	Dec 27 09:37:41 no-preload-963457 crio[561]: time="2025-12-27T09:37:41.839462007Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=1b735ef1-b599-4c19-b8f1-5cb973703b99 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 09:37:41 no-preload-963457 crio[561]: time="2025-12-27T09:37:41.840374829Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=b1de8586-431f-405f-a0f4-7633f7182ffd name=/runtime.v1.ImageService/ImageStatus
	Dec 27 09:37:41 no-preload-963457 crio[561]: time="2025-12-27T09:37:41.841458793Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-qj7z2/dashboard-metrics-scraper" id=7a41a8a6-4e21-41b7-887e-a65a065240ba name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 09:37:41 no-preload-963457 crio[561]: time="2025-12-27T09:37:41.841611144Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 09:37:41 no-preload-963457 crio[561]: time="2025-12-27T09:37:41.847594353Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 09:37:41 no-preload-963457 crio[561]: time="2025-12-27T09:37:41.848098641Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 09:37:41 no-preload-963457 crio[561]: time="2025-12-27T09:37:41.878056651Z" level=info msg="Created container 32f303997e3c2793c1602c91acb4c248bc7aa4fb7fcd92f892f04f2ea3647d50: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-qj7z2/dashboard-metrics-scraper" id=7a41a8a6-4e21-41b7-887e-a65a065240ba name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 09:37:41 no-preload-963457 crio[561]: time="2025-12-27T09:37:41.879158967Z" level=info msg="Starting container: 32f303997e3c2793c1602c91acb4c248bc7aa4fb7fcd92f892f04f2ea3647d50" id=abf41846-1d47-46c0-a1f8-6cbb0a240014 name=/runtime.v1.RuntimeService/StartContainer
	Dec 27 09:37:41 no-preload-963457 crio[561]: time="2025-12-27T09:37:41.882360686Z" level=info msg="Started container" PID=1789 containerID=32f303997e3c2793c1602c91acb4c248bc7aa4fb7fcd92f892f04f2ea3647d50 description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-qj7z2/dashboard-metrics-scraper id=abf41846-1d47-46c0-a1f8-6cbb0a240014 name=/runtime.v1.RuntimeService/StartContainer sandboxID=5531d82b938e4bbdce99428fb11b73138b7b7ef0e0991bc8ba4569f8d44ba308
	Dec 27 09:37:42 no-preload-963457 crio[561]: time="2025-12-27T09:37:42.009727423Z" level=info msg="Removing container: 376c6336427a2b0af7525abe2d6342dabd3af6c951a46105f82ccfcb95200855" id=e1af7286-3667-4bea-9eaa-72be4610f17d name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 27 09:37:42 no-preload-963457 crio[561]: time="2025-12-27T09:37:42.019203402Z" level=info msg="Removed container 376c6336427a2b0af7525abe2d6342dabd3af6c951a46105f82ccfcb95200855: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-qj7z2/dashboard-metrics-scraper" id=e1af7286-3667-4bea-9eaa-72be4610f17d name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	32f303997e3c2       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           13 seconds ago      Exited              dashboard-metrics-scraper   3                   5531d82b938e4       dashboard-metrics-scraper-867fb5f87b-qj7z2   kubernetes-dashboard
	65af506e89098       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           24 seconds ago      Running             storage-provisioner         1                   843b9ed615cab       storage-provisioner                          kube-system
	86f0772760e7d       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   43 seconds ago      Running             kubernetes-dashboard        0                   a8768ad8791de       kubernetes-dashboard-b84665fb8-hlxhq         kubernetes-dashboard
	bd093986aac0e       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           54 seconds ago      Running             busybox                     1                   6c3aa73e31760       busybox                                      default
	14a1ce99783ca       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                           54 seconds ago      Running             coredns                     0                   25be004bfa0c9       coredns-7d764666f9-wnzhx                     kube-system
	277a270b3acbc       32652ff1bbe6b6df16a3dc621fcad4e6e185a2852bffc5847aab79e36c54bfb8                                           54 seconds ago      Running             kube-proxy                  0                   dfab418fb46c1       kube-proxy-grkqs                             kube-system
	eb5602ad87854       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           54 seconds ago      Exited              storage-provisioner         0                   843b9ed615cab       storage-provisioner                          kube-system
	90f6040bd590b       4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251                                           54 seconds ago      Running             kindnet-cni                 0                   853e671abc275       kindnet-7kw8b                                kube-system
	3e2d56e4ec07d       5c6acd67e9cd10eb60a246cd233db251ef62ea97e6572f897e873f0cb648f499                                           56 seconds ago      Running             kube-apiserver              0                   5ba47ece9636d       kube-apiserver-no-preload-963457             kube-system
	0edde0ef00356       550794e3b12ac21ec7fd940bdfb45f7c8dae3c52acd8c70e580b882de20c3dcc                                           56 seconds ago      Running             kube-scheduler              0                   d30d42f94dcae       kube-scheduler-no-preload-963457             kube-system
	03b54f84cfa7e       2c9a4b058bd7e6ec479c38f9e8a1dac2f5ee5b0a3ebda6dfac968e8720229508                                           56 seconds ago      Running             kube-controller-manager     0                   1d3bd7cfc1412       kube-controller-manager-no-preload-963457    kube-system
	716a7952d1fa9       0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2                                           56 seconds ago      Running             etcd                        0                   e51ec478886f5       etcd-no-preload-963457                       kube-system
	
	
	==> coredns [14a1ce99783ca90e3f5c758ff748c1c1f76f41c8636b97b01bcf78d741646fd6] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:59521 - 40485 "HINFO IN 4872048296174984903.8139166939014044202. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.024238943s
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	
	
	==> describe nodes <==
	Name:               no-preload-963457
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-963457
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6d12708197eb5984f26340287ced0d25367967e8
	                    minikube.k8s.io/name=no-preload-963457
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_27T09_35_58_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 27 Dec 2025 09:35:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-963457
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 27 Dec 2025 09:37:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 27 Dec 2025 09:37:51 +0000   Sat, 27 Dec 2025 09:35:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 27 Dec 2025 09:37:51 +0000   Sat, 27 Dec 2025 09:35:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 27 Dec 2025 09:37:51 +0000   Sat, 27 Dec 2025 09:35:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 27 Dec 2025 09:37:51 +0000   Sat, 27 Dec 2025 09:36:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-963457
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 6a05ebbd9dbde47388fc365d694bbbfd
	  System UUID:                d821149c-44f6-4337-913a-683907f0e23a
	  Boot ID:                    38891013-55ac-4a66-aef6-0b0711ecc60c
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 coredns-7d764666f9-wnzhx                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     112s
	  kube-system                 etcd-no-preload-963457                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         118s
	  kube-system                 kindnet-7kw8b                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      112s
	  kube-system                 kube-apiserver-no-preload-963457              250m (3%)     0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 kube-controller-manager-no-preload-963457     200m (2%)     0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 kube-proxy-grkqs                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-scheduler-no-preload-963457              100m (1%)     0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         112s
	  kubernetes-dashboard        dashboard-metrics-scraper-867fb5f87b-qj7z2    0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-hlxhq          0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  114s  node-controller  Node no-preload-963457 event: Registered Node no-preload-963457 in Controller
	  Normal  RegisteredNode  53s   node-controller  Node no-preload-963457 event: Registered Node no-preload-963457 in Controller
	
	
	==> dmesg <==
	[Dec27 09:02] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 3a ce 76 ea 00 16 08 06
	[ +15.308804] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 5e ed 76 f4 5a 59 08 06
	[  +0.000321] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 3a ce 76 ea 00 16 08 06
	[  +1.257710] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 62 30 8d 6e b6 d3 08 06
	[Dec27 09:03] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff de f8 3b c4 47 35 08 06
	[  +0.002052] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff de 0f d7 5a 4b 84 08 06
	[ +15.582399] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff f6 ce 0c 6c 86 d2 08 06
	[  +0.000307] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 62 30 8d 6e b6 d3 08 06
	[  +9.916919] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 5e cc 0c a0 d8 19 08 06
	[  +0.000360] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 06 01 72 f8 79 43 08 06
	[ +21.695394] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 66 bb be 81 50 ce 08 06
	[  +0.000381] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff de 0f d7 5a 4b 84 08 06
	
	
	==> etcd [716a7952d1fa9945a526436df75297cbf883fb889ba62f53b3ae1e94790bfeaa] <==
	{"level":"info","ts":"2025-12-27T09:36:58.407046Z","caller":"membership/cluster.go:433","msg":"ignore already added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2025-12-27T09:36:58.407090Z","caller":"embed/etcd.go:640","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-12-27T09:36:58.407105Z","caller":"embed/etcd.go:611","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-12-27T09:36:58.407137Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","from":"3.6","to":"3.6"}
	{"level":"info","ts":"2025-12-27T09:36:58.695608Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"9f0758e1c58a86ed is starting a new election at term 2"}
	{"level":"info","ts":"2025-12-27T09:36:58.695653Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"9f0758e1c58a86ed became pre-candidate at term 2"}
	{"level":"info","ts":"2025-12-27T09:36:58.695715Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-12-27T09:36:58.695727Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"9f0758e1c58a86ed has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-27T09:36:58.695741Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"9f0758e1c58a86ed became candidate at term 3"}
	{"level":"info","ts":"2025-12-27T09:36:58.696308Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-12-27T09:36:58.696355Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"9f0758e1c58a86ed has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-27T09:36:58.696378Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"9f0758e1c58a86ed became leader at term 3"}
	{"level":"info","ts":"2025-12-27T09:36:58.696388Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-12-27T09:36:58.697027Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:no-preload-963457 ClientURLs:[https://192.168.85.2:2379]}","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-27T09:36:58.697068Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T09:36:58.697040Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T09:36:58.697246Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-27T09:36:58.697295Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-27T09:36:58.698448Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-27T09:36:58.698513Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-27T09:36:58.701446Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-12-27T09:36:58.702449Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-27T09:37:50.589404Z","caller":"traceutil/trace.go:172","msg":"trace[463382398] transaction","detail":"{read_only:false; response_revision:697; number_of_response:1; }","duration":"148.322536ms","start":"2025-12-27T09:37:50.441006Z","end":"2025-12-27T09:37:50.589329Z","steps":["trace[463382398] 'process raft request'  (duration: 148.191312ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-27T09:37:51.257511Z","caller":"traceutil/trace.go:172","msg":"trace[642364407] transaction","detail":"{read_only:false; response_revision:700; number_of_response:1; }","duration":"123.08666ms","start":"2025-12-27T09:37:51.134408Z","end":"2025-12-27T09:37:51.257495Z","steps":["trace[642364407] 'process raft request'  (duration: 119.449026ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-27T09:37:51.335047Z","caller":"traceutil/trace.go:172","msg":"trace[152291325] transaction","detail":"{read_only:false; response_revision:701; number_of_response:1; }","duration":"172.565543ms","start":"2025-12-27T09:37:51.162465Z","end":"2025-12-27T09:37:51.335031Z","steps":["trace[152291325] 'process raft request'  (duration: 172.47348ms)"],"step_count":1}
	
	
	==> kernel <==
	 09:37:55 up  1:20,  0 user,  load average: 3.07, 3.08, 2.37
	Linux no-preload-963457 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [90f6040bd590bec7f0928127c1409b1a37c5c1c010abbf933230ee623c4fceca] <==
	I1227 09:37:00.498875       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1227 09:37:00.499137       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1227 09:37:00.499329       1 main.go:148] setting mtu 1500 for CNI 
	I1227 09:37:00.499358       1 main.go:178] kindnetd IP family: "ipv4"
	I1227 09:37:00.499385       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-27T09:37:00Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1227 09:37:00.701034       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1227 09:37:00.701066       1 controller.go:381] "Waiting for informer caches to sync"
	I1227 09:37:00.701079       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1227 09:37:00.701538       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1227 09:37:01.201656       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1227 09:37:01.201688       1 metrics.go:72] Registering metrics
	I1227 09:37:01.201730       1 controller.go:711] "Syncing nftables rules"
	I1227 09:37:10.701735       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1227 09:37:10.701822       1 main.go:301] handling current node
	I1227 09:37:20.701717       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1227 09:37:20.701752       1 main.go:301] handling current node
	I1227 09:37:30.701288       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1227 09:37:30.701335       1 main.go:301] handling current node
	I1227 09:37:40.701912       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1227 09:37:40.701954       1 main.go:301] handling current node
	I1227 09:37:50.706257       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1227 09:37:50.706292       1 main.go:301] handling current node
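	The repeated "Handling node with IPs" pairs above land exactly 10 seconds apart, i.e. kindnet re-syncs the node list on a fixed ticker. A minimal Go sketch of that reconcile loop; handleNode is a hypothetical stand-in for kindnet's per-node sync, not its actual function:

	package main

	import (
		"fmt"
		"time"
	)

	// handleNode is a placeholder for the per-node work kindnet does each pass.
	func handleNode(ips map[string]struct{}) {
		fmt.Printf("Handling node with IPs: %v\n", ips)
	}

	func main() {
		ips := map[string]struct{}{"192.168.85.2": {}}
		// Wake every 10s and re-handle the (single) node, as the log shows.
		ticker := time.NewTicker(10 * time.Second)
		defer ticker.Stop()
		for range ticker.C {
			handleNode(ips)
		}
	}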
	
	
	==> kube-apiserver [3e2d56e4ec07d845c73c075df87007dd294313f8f25d93e2e062eae21343461c] <==
	I1227 09:36:59.812090       1 autoregister_controller.go:144] Starting autoregister controller
	I1227 09:36:59.812097       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1227 09:36:59.812103       1 cache.go:39] Caches are synced for autoregister controller
	I1227 09:36:59.812204       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1227 09:36:59.812217       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1227 09:36:59.812363       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1227 09:36:59.813186       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1227 09:36:59.813315       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1227 09:36:59.820940       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1227 09:36:59.828846       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1227 09:36:59.829648       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1227 09:36:59.838572       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	E1227 09:36:59.844758       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1227 09:36:59.880380       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1227 09:36:59.932597       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1227 09:37:00.238449       1 controller.go:667] quota admission added evaluator for: namespaces
	I1227 09:37:00.291099       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1227 09:37:00.310651       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1227 09:37:00.318367       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1227 09:37:00.361663       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.105.170.64"}
	I1227 09:37:00.371947       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.98.146.192"}
	I1227 09:37:00.716096       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1227 09:37:03.387192       1 controller.go:667] quota admission added evaluator for: endpoints
	I1227 09:37:03.543572       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1227 09:37:03.592299       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [03b54f84cfa7e0b506b3122cd323cd0db22c3c4310cfedd9769eeb770ec9a426] <==
	I1227 09:37:02.939621       1 shared_informer.go:377] "Caches are synced"
	I1227 09:37:02.939622       1 shared_informer.go:377] "Caches are synced"
	I1227 09:37:02.939630       1 shared_informer.go:377] "Caches are synced"
	I1227 09:37:02.941405       1 shared_informer.go:377] "Caches are synced"
	I1227 09:37:02.941406       1 shared_informer.go:377] "Caches are synced"
	I1227 09:37:02.941432       1 shared_informer.go:377] "Caches are synced"
	I1227 09:37:02.941444       1 shared_informer.go:377] "Caches are synced"
	I1227 09:37:02.941460       1 shared_informer.go:377] "Caches are synced"
	I1227 09:37:02.941470       1 range_allocator.go:177] "Sending events to api server"
	I1227 09:37:02.941505       1 range_allocator.go:181] "Starting range CIDR allocator"
	I1227 09:37:02.941509       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 09:37:02.941512       1 shared_informer.go:377] "Caches are synced"
	I1227 09:37:02.941521       1 shared_informer.go:377] "Caches are synced"
	I1227 09:37:02.941534       1 shared_informer.go:377] "Caches are synced"
	I1227 09:37:02.941574       1 shared_informer.go:377] "Caches are synced"
	I1227 09:37:02.941603       1 shared_informer.go:377] "Caches are synced"
	I1227 09:37:02.941638       1 shared_informer.go:377] "Caches are synced"
	I1227 09:37:02.941658       1 shared_informer.go:377] "Caches are synced"
	I1227 09:37:02.941695       1 shared_informer.go:377] "Caches are synced"
	I1227 09:37:02.949360       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 09:37:02.949982       1 shared_informer.go:377] "Caches are synced"
	I1227 09:37:03.044972       1 shared_informer.go:377] "Caches are synced"
	I1227 09:37:03.044991       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1227 09:37:03.044996       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1227 09:37:03.050372       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [277a270b3acbca9938619dd72bc982e6116059d92f6771afd9e2f497d7dd77b4] <==
	I1227 09:37:00.299000       1 server_linux.go:53] "Using iptables proxy"
	I1227 09:37:00.368169       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 09:37:00.469062       1 shared_informer.go:377] "Caches are synced"
	I1227 09:37:00.469092       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1227 09:37:00.469623       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1227 09:37:00.495891       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1227 09:37:00.495951       1 server_linux.go:136] "Using iptables Proxier"
	I1227 09:37:00.502697       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1227 09:37:00.503127       1 server.go:529] "Version info" version="v1.35.0"
	I1227 09:37:00.503159       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 09:37:00.504465       1 config.go:200] "Starting service config controller"
	I1227 09:37:00.504533       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1227 09:37:00.504587       1 config.go:106] "Starting endpoint slice config controller"
	I1227 09:37:00.504612       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1227 09:37:00.504652       1 config.go:403] "Starting serviceCIDR config controller"
	I1227 09:37:00.504685       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1227 09:37:00.505177       1 config.go:309] "Starting node config controller"
	I1227 09:37:00.505200       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1227 09:37:00.505209       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1227 09:37:00.605174       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1227 09:37:00.605211       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1227 09:37:00.605210       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [0edde0ef003566fce4556ceb0a3d7cbc04d2cd6685f5afd803595f3ababb1338] <==
	I1227 09:36:58.632312       1 serving.go:386] Generated self-signed cert in-memory
	W1227 09:36:59.732558       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1227 09:36:59.732595       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1227 09:36:59.732605       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1227 09:36:59.732615       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1227 09:36:59.788632       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0"
	I1227 09:36:59.788685       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 09:36:59.794043       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1227 09:36:59.794589       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1227 09:36:59.799902       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 09:36:59.795642       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1227 09:36:59.900403       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 27 09:37:18 no-preload-963457 kubelet[714]: E1227 09:37:18.939553     714 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-qj7z2" containerName="dashboard-metrics-scraper"
	Dec 27 09:37:19 no-preload-963457 kubelet[714]: I1227 09:37:19.020121     714 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-qj7z2" podStartSLOduration=1.211529696 podStartE2EDuration="16.02010183s" podCreationTimestamp="2025-12-27 09:37:03 +0000 UTC" firstStartedPulling="2025-12-27 09:37:04.032342742 +0000 UTC m=+6.287426508" lastFinishedPulling="2025-12-27 09:37:18.840914887 +0000 UTC m=+21.095998642" observedRunningTime="2025-12-27 09:37:19.019964909 +0000 UTC m=+21.275048685" watchObservedRunningTime="2025-12-27 09:37:19.02010183 +0000 UTC m=+21.275185608"
	Dec 27 09:37:19 no-preload-963457 kubelet[714]: I1227 09:37:19.943948     714 scope.go:122] "RemoveContainer" containerID="6abd451748c9f46e280949ff15c32d7be9f0710b8cb9dacb9c97ed3250d82605"
	Dec 27 09:37:19 no-preload-963457 kubelet[714]: E1227 09:37:19.944112     714 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-qj7z2" containerName="dashboard-metrics-scraper"
	Dec 27 09:37:19 no-preload-963457 kubelet[714]: I1227 09:37:19.944135     714 scope.go:122] "RemoveContainer" containerID="376c6336427a2b0af7525abe2d6342dabd3af6c951a46105f82ccfcb95200855"
	Dec 27 09:37:19 no-preload-963457 kubelet[714]: E1227 09:37:19.944322     714 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-qj7z2_kubernetes-dashboard(0dfd0f3d-b58f-4afe-ba92-d1dd627e08aa)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-qj7z2" podUID="0dfd0f3d-b58f-4afe-ba92-d1dd627e08aa"
	Dec 27 09:37:24 no-preload-963457 kubelet[714]: E1227 09:37:24.248588     714 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-qj7z2" containerName="dashboard-metrics-scraper"
	Dec 27 09:37:24 no-preload-963457 kubelet[714]: I1227 09:37:24.248650     714 scope.go:122] "RemoveContainer" containerID="376c6336427a2b0af7525abe2d6342dabd3af6c951a46105f82ccfcb95200855"
	Dec 27 09:37:24 no-preload-963457 kubelet[714]: E1227 09:37:24.248965     714 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-qj7z2_kubernetes-dashboard(0dfd0f3d-b58f-4afe-ba92-d1dd627e08aa)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-qj7z2" podUID="0dfd0f3d-b58f-4afe-ba92-d1dd627e08aa"
	Dec 27 09:37:30 no-preload-963457 kubelet[714]: I1227 09:37:30.973012     714 scope.go:122] "RemoveContainer" containerID="eb5602ad87854ef23de99c48a34f5a0d1c623cb078e609a2a4b567f1da38de9e"
	Dec 27 09:37:38 no-preload-963457 kubelet[714]: E1227 09:37:38.654243     714 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-wnzhx" containerName="coredns"
	Dec 27 09:37:41 no-preload-963457 kubelet[714]: E1227 09:37:41.838997     714 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-qj7z2" containerName="dashboard-metrics-scraper"
	Dec 27 09:37:41 no-preload-963457 kubelet[714]: I1227 09:37:41.839032     714 scope.go:122] "RemoveContainer" containerID="376c6336427a2b0af7525abe2d6342dabd3af6c951a46105f82ccfcb95200855"
	Dec 27 09:37:42 no-preload-963457 kubelet[714]: I1227 09:37:42.006739     714 scope.go:122] "RemoveContainer" containerID="376c6336427a2b0af7525abe2d6342dabd3af6c951a46105f82ccfcb95200855"
	Dec 27 09:37:42 no-preload-963457 kubelet[714]: E1227 09:37:42.006960     714 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-qj7z2" containerName="dashboard-metrics-scraper"
	Dec 27 09:37:42 no-preload-963457 kubelet[714]: I1227 09:37:42.006992     714 scope.go:122] "RemoveContainer" containerID="32f303997e3c2793c1602c91acb4c248bc7aa4fb7fcd92f892f04f2ea3647d50"
	Dec 27 09:37:42 no-preload-963457 kubelet[714]: E1227 09:37:42.007186     714 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-qj7z2_kubernetes-dashboard(0dfd0f3d-b58f-4afe-ba92-d1dd627e08aa)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-qj7z2" podUID="0dfd0f3d-b58f-4afe-ba92-d1dd627e08aa"
	Dec 27 09:37:44 no-preload-963457 kubelet[714]: E1227 09:37:44.248497     714 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-qj7z2" containerName="dashboard-metrics-scraper"
	Dec 27 09:37:44 no-preload-963457 kubelet[714]: I1227 09:37:44.248539     714 scope.go:122] "RemoveContainer" containerID="32f303997e3c2793c1602c91acb4c248bc7aa4fb7fcd92f892f04f2ea3647d50"
	Dec 27 09:37:44 no-preload-963457 kubelet[714]: E1227 09:37:44.248700     714 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-qj7z2_kubernetes-dashboard(0dfd0f3d-b58f-4afe-ba92-d1dd627e08aa)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-qj7z2" podUID="0dfd0f3d-b58f-4afe-ba92-d1dd627e08aa"
	Dec 27 09:37:52 no-preload-963457 kubelet[714]: I1227 09:37:52.637980     714 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Dec 27 09:37:52 no-preload-963457 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 27 09:37:52 no-preload-963457 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 27 09:37:52 no-preload-963457 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 27 09:37:52 no-preload-963457 systemd[1]: kubelet.service: Consumed 1.839s CPU time.
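	The kubelet lines above show the dashboard-metrics-scraper restart delay doubling ("back-off 20s", then "back-off 40s"), which is CrashLoopBackOff's exponential schedule. A small Go sketch of that schedule; the 10s base and 5m cap are assumptions matching upstream kubelet defaults, not values this log proves:

	package main

	import (
		"fmt"
		"time"
	)

	func main() {
		// Assumed kubelet defaults: 10s initial delay, doubling, capped at 5m.
		delay, maxDelay := 10*time.Second, 5*time.Minute
		for attempt := 1; attempt <= 7; attempt++ {
			fmt.Printf("restart attempt %d: back-off %s\n", attempt, delay)
			delay *= 2
			if delay > maxDelay {
				delay = maxDelay
			}
		}
	}

	Attempts 2 and 3 of this schedule give the 20s and 40s delays seen in the pod_workers.go errors above.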
	
	
	==> kubernetes-dashboard [86f0772760e7d28ea35556b6cafec992e2b1fa6a83846afaed7db6f327a4aed4] <==
	2025/12/27 09:37:11 Starting overwatch
	2025/12/27 09:37:11 Using namespace: kubernetes-dashboard
	2025/12/27 09:37:11 Using in-cluster config to connect to apiserver
	2025/12/27 09:37:11 Using secret token for csrf signing
	2025/12/27 09:37:11 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/27 09:37:11 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/27 09:37:11 Successful initial request to the apiserver, version: v1.35.0
	2025/12/27 09:37:11 Generating JWE encryption key
	2025/12/27 09:37:11 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/27 09:37:11 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/27 09:37:12 Initializing JWE encryption key from synchronized object
	2025/12/27 09:37:12 Creating in-cluster Sidecar client
	2025/12/27 09:37:12 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/27 09:37:12 Serving insecurely on HTTP port: 9090
	2025/12/27 09:37:42 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [65af506e89098a5c5b4ffc8cff2878629f11900d58c4ff5309b81a9befadadd5] <==
	I1227 09:37:31.023586       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1227 09:37:31.031275       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1227 09:37:31.031331       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1227 09:37:31.033481       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 09:37:34.489490       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 09:37:38.749718       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 09:37:42.349154       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 09:37:45.403093       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 09:37:48.425470       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 09:37:48.429560       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1227 09:37:48.429706       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1227 09:37:48.429862       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"fe1a3181-a427-43a3-94cb-fd67a4c65111", APIVersion:"v1", ResourceVersion:"695", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-963457_65093ed9-f9f2-41f6-96f4-0ed59e9cf9b4 became leader
	I1227 09:37:48.429922       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-963457_65093ed9-f9f2-41f6-96f4-0ed59e9cf9b4!
	W1227 09:37:48.431666       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 09:37:48.434772       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1227 09:37:48.530445       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-963457_65093ed9-f9f2-41f6-96f4-0ed59e9cf9b4!
	W1227 09:37:50.438517       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 09:37:50.590566       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 09:37:52.594313       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 09:37:52.599295       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 09:37:54.602616       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 09:37:54.607061       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [eb5602ad87854ef23de99c48a34f5a0d1c623cb078e609a2a4b567f1da38de9e] <==
	I1227 09:37:00.270987       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1227 09:37:30.273209       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
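The first storage-provisioner container in the dump above exits fatally because its startup probe of the apiserver times out: 'error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout'. A minimal Go sketch of that kind of reachability check, using the service IP and 32s timeout taken from that log line; it is not the provisioner's actual client, which would load the in-cluster service-account CA and token:

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 32 * time.Second,
			// Sketch only: skip TLS verification instead of loading the
			// in-cluster CA a real pod client would use.
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get("https://10.96.0.1:443/version")
		if err != nil {
			fmt.Println("error getting server version:", err)
			return
		}
		defer resp.Body.Close()
		fmt.Println("apiserver responded:", resp.Status)
	}

A dial timeout here (as in the log) points at service routing rather than the apiserver process itself, which is consistent with the second provisioner instance succeeding once kube-proxy's rules were in place.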
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-963457 -n no-preload-963457
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-963457 -n no-preload-963457: exit status 2 (358.817699ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context no-preload-963457 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-963457
helpers_test.go:244: (dbg) docker inspect no-preload-963457:

-- stdout --
	[
	    {
	        "Id": "0e530c327725200854097c32fa88d971e9134b10f00f38040c69cbfdb8db2177",
	        "Created": "2025-12-27T09:35:31.385556523Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 629770,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-27T09:36:49.795104128Z",
	            "FinishedAt": "2025-12-27T09:36:48.916499117Z"
	        },
	        "Image": "sha256:c444b5e11df9d6bc496256b0e11f4e11deb33a885211ded2c3a4667df55bcb6b",
	        "ResolvConfPath": "/var/lib/docker/containers/0e530c327725200854097c32fa88d971e9134b10f00f38040c69cbfdb8db2177/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0e530c327725200854097c32fa88d971e9134b10f00f38040c69cbfdb8db2177/hostname",
	        "HostsPath": "/var/lib/docker/containers/0e530c327725200854097c32fa88d971e9134b10f00f38040c69cbfdb8db2177/hosts",
	        "LogPath": "/var/lib/docker/containers/0e530c327725200854097c32fa88d971e9134b10f00f38040c69cbfdb8db2177/0e530c327725200854097c32fa88d971e9134b10f00f38040c69cbfdb8db2177-json.log",
	        "Name": "/no-preload-963457",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-963457:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-963457",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "0e530c327725200854097c32fa88d971e9134b10f00f38040c69cbfdb8db2177",
	                "LowerDir": "/var/lib/docker/overlay2/a948dd6c51157da59690737cde9043eec469d473ae47400a0e1b93038c0c9f1d-init/diff:/var/lib/docker/overlay2/8c7e7d4b0a6e752d3b033998593f18412fe880cdae3fca36ce4655d5d5dd6a34/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a948dd6c51157da59690737cde9043eec469d473ae47400a0e1b93038c0c9f1d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a948dd6c51157da59690737cde9043eec469d473ae47400a0e1b93038c0c9f1d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a948dd6c51157da59690737cde9043eec469d473ae47400a0e1b93038c0c9f1d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-963457",
	                "Source": "/var/lib/docker/volumes/no-preload-963457/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-963457",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-963457",
	                "name.minikube.sigs.k8s.io": "no-preload-963457",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "bf235d27c31ce3c0429809fa70bd10057a327cd3027101e0c55ce1b1aa16f7e8",
	            "SandboxKey": "/var/run/docker/netns/bf235d27c31c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33458"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33459"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33462"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33460"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33461"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-963457": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "e27fce9ec482a6f231f0cd34fc8f67937ff2dfde3915e36e90c5e0b4fd43cbe7",
	                    "EndpointID": "39b0495ba09f2a4dab9cd7a02bcad3e1adea35a5862dd5befdaef68af2353f1b",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "a6:fc:de:fe:64:7a",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-963457",
	                        "0e530c327725"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
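
Rather than parsing that full JSON, the cli_runner lines throughout this report read single fields from docker inspect with Go templates. A minimal sketch of the same pattern (assumes only that the docker CLI is on PATH):

	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	func main() {
		// Same shape as the harness's own calls, e.g.
		// docker container inspect <name> --format={{.State.Status}}
		out, err := exec.Command("docker", "container", "inspect",
			"-f", "{{.State.Status}}", "no-preload-963457").Output()
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		fmt.Printf("state: %s", out) // "running" for the container above
	}
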
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-963457 -n no-preload-963457
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-963457 -n no-preload-963457: exit status 2 (372.967721ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
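
The --format flag in these status calls is parsed as a Go text/template and executed against minikube's status object, which is why {{.Host}} or {{.APIServer}} prints a single word such as Running even while the exit code reports a component outside its desired state. A small sketch of the mechanism (the struct shape here is an assumption for illustration, not minikube's actual type):

	package main
	
	import (
		"os"
		"text/template"
	)
	
	// Assumed shape of the object the template runs against.
	type Status struct {
		Host, Kubelet, APIServer string
	}
	
	func main() {
		// Equivalent of: minikube status --format={{.APIServer}}
		tmpl := template.Must(template.New("status").Parse("{{.APIServer}}\n"))
		_ = tmpl.Execute(os.Stdout,
			Status{Host: "Running", Kubelet: "Running", APIServer: "Running"})
	}
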
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-963457 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p no-preload-963457 logs -n 25: (1.206202189s)
helpers_test.go:261: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p old-k8s-version-094398                                                                                                                                                                                                                     │ old-k8s-version-094398       │ jenkins │ v1.37.0 │ 27 Dec 25 09:36 UTC │ 27 Dec 25 09:36 UTC │
	│ addons  │ enable dashboard -p no-preload-963457 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-963457            │ jenkins │ v1.37.0 │ 27 Dec 25 09:36 UTC │ 27 Dec 25 09:36 UTC │
	│ start   │ -p no-preload-963457 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                       │ no-preload-963457            │ jenkins │ v1.37.0 │ 27 Dec 25 09:36 UTC │ 27 Dec 25 09:37 UTC │
	│ delete  │ -p old-k8s-version-094398                                                                                                                                                                                                                     │ old-k8s-version-094398       │ jenkins │ v1.37.0 │ 27 Dec 25 09:36 UTC │ 27 Dec 25 09:36 UTC │
	│ start   │ -p newest-cni-246956 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ newest-cni-246956            │ jenkins │ v1.37.0 │ 27 Dec 25 09:36 UTC │ 27 Dec 25 09:37 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-497722 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-497722 │ jenkins │ v1.37.0 │ 27 Dec 25 09:36 UTC │ 27 Dec 25 09:36 UTC │
	│ start   │ -p default-k8s-diff-port-497722 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-497722 │ jenkins │ v1.37.0 │ 27 Dec 25 09:36 UTC │ 27 Dec 25 09:37 UTC │
	│ image   │ embed-certs-912564 image list --format=json                                                                                                                                                                                                   │ embed-certs-912564           │ jenkins │ v1.37.0 │ 27 Dec 25 09:37 UTC │ 27 Dec 25 09:37 UTC │
	│ pause   │ -p embed-certs-912564 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-912564           │ jenkins │ v1.37.0 │ 27 Dec 25 09:37 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-246956 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-246956            │ jenkins │ v1.37.0 │ 27 Dec 25 09:37 UTC │                     │
	│ stop    │ -p newest-cni-246956 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-246956            │ jenkins │ v1.37.0 │ 27 Dec 25 09:37 UTC │ 27 Dec 25 09:37 UTC │
	│ delete  │ -p embed-certs-912564                                                                                                                                                                                                                         │ embed-certs-912564           │ jenkins │ v1.37.0 │ 27 Dec 25 09:37 UTC │ 27 Dec 25 09:37 UTC │
	│ delete  │ -p embed-certs-912564                                                                                                                                                                                                                         │ embed-certs-912564           │ jenkins │ v1.37.0 │ 27 Dec 25 09:37 UTC │ 27 Dec 25 09:37 UTC │
	│ start   │ -p auto-157923 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-157923                  │ jenkins │ v1.37.0 │ 27 Dec 25 09:37 UTC │                     │
	│ addons  │ enable dashboard -p newest-cni-246956 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-246956            │ jenkins │ v1.37.0 │ 27 Dec 25 09:37 UTC │ 27 Dec 25 09:37 UTC │
	│ start   │ -p newest-cni-246956 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ newest-cni-246956            │ jenkins │ v1.37.0 │ 27 Dec 25 09:37 UTC │ 27 Dec 25 09:37 UTC │
	│ image   │ newest-cni-246956 image list --format=json                                                                                                                                                                                                    │ newest-cni-246956            │ jenkins │ v1.37.0 │ 27 Dec 25 09:37 UTC │ 27 Dec 25 09:37 UTC │
	│ pause   │ -p newest-cni-246956 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-246956            │ jenkins │ v1.37.0 │ 27 Dec 25 09:37 UTC │                     │
	│ delete  │ -p newest-cni-246956                                                                                                                                                                                                                          │ newest-cni-246956            │ jenkins │ v1.37.0 │ 27 Dec 25 09:37 UTC │ 27 Dec 25 09:37 UTC │
	│ delete  │ -p newest-cni-246956                                                                                                                                                                                                                          │ newest-cni-246956            │ jenkins │ v1.37.0 │ 27 Dec 25 09:37 UTC │ 27 Dec 25 09:37 UTC │
	│ start   │ -p kindnet-157923 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio                                                                                                      │ kindnet-157923               │ jenkins │ v1.37.0 │ 27 Dec 25 09:37 UTC │                     │
	│ image   │ no-preload-963457 image list --format=json                                                                                                                                                                                                    │ no-preload-963457            │ jenkins │ v1.37.0 │ 27 Dec 25 09:37 UTC │ 27 Dec 25 09:37 UTC │
	│ pause   │ -p no-preload-963457 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-963457            │ jenkins │ v1.37.0 │ 27 Dec 25 09:37 UTC │                     │
	│ image   │ default-k8s-diff-port-497722 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-497722 │ jenkins │ v1.37.0 │ 27 Dec 25 09:37 UTC │ 27 Dec 25 09:37 UTC │
	│ pause   │ -p default-k8s-diff-port-497722 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-497722 │ jenkins │ v1.37.0 │ 27 Dec 25 09:37 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/27 09:37:46
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1227 09:37:46.864671  647990 out.go:360] Setting OutFile to fd 1 ...
	I1227 09:37:46.864827  647990 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:37:46.864841  647990 out.go:374] Setting ErrFile to fd 2...
	I1227 09:37:46.864848  647990 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:37:46.865083  647990 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22343-373581/.minikube/bin
	I1227 09:37:46.865565  647990 out.go:368] Setting JSON to false
	I1227 09:37:46.866843  647990 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4811,"bootTime":1766823456,"procs":347,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1227 09:37:46.866917  647990 start.go:143] virtualization: kvm guest
	I1227 09:37:46.868763  647990 out.go:179] * [kindnet-157923] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1227 09:37:46.869915  647990 out.go:179]   - MINIKUBE_LOCATION=22343
	I1227 09:37:46.869935  647990 notify.go:221] Checking for updates...
	I1227 09:37:46.872147  647990 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 09:37:46.873206  647990 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22343-373581/kubeconfig
	I1227 09:37:46.874198  647990 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22343-373581/.minikube
	I1227 09:37:46.875076  647990 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1227 09:37:46.876025  647990 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1227 09:37:46.877293  647990 config.go:182] Loaded profile config "auto-157923": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:37:46.877377  647990 config.go:182] Loaded profile config "default-k8s-diff-port-497722": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:37:46.877469  647990 config.go:182] Loaded profile config "no-preload-963457": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:37:46.877582  647990 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 09:37:46.900930  647990 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1227 09:37:46.901023  647990 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 09:37:46.956322  647990 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-27 09:37:46.946538862 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1227 09:37:46.956429  647990 docker.go:319] overlay module found
	I1227 09:37:46.957771  647990 out.go:179] * Using the docker driver based on user configuration
	I1227 09:37:46.958739  647990 start.go:309] selected driver: docker
	I1227 09:37:46.958754  647990 start.go:928] validating driver "docker" against <nil>
	I1227 09:37:46.958765  647990 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 09:37:46.959277  647990 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 09:37:47.014097  647990 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-27 09:37:47.004076494 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1227 09:37:47.014262  647990 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1227 09:37:47.014493  647990 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 09:37:47.015832  647990 out.go:179] * Using Docker driver with root privileges
	I1227 09:37:47.016778  647990 cni.go:84] Creating CNI manager for "kindnet"
	I1227 09:37:47.016807  647990 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I1227 09:37:47.016889  647990 start.go:353] cluster config:
	{Name:kindnet-157923 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:kindnet-157923 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 09:37:47.018032  647990 out.go:179] * Starting "kindnet-157923" primary control-plane node in "kindnet-157923" cluster
	I1227 09:37:47.018884  647990 cache.go:134] Beginning downloading kic base image for docker with crio
	I1227 09:37:47.019947  647990 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
	I1227 09:37:47.021519  647990 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 09:37:47.021557  647990 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22343-373581/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4
	I1227 09:37:47.021566  647990 cache.go:65] Caching tarball of preloaded images
	I1227 09:37:47.021665  647990 preload.go:251] Found /home/jenkins/minikube-integration/22343-373581/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1227 09:37:47.021690  647990 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I1227 09:37:47.021942  647990 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1227 09:37:47.022225  647990 profile.go:143] Saving config to /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/kindnet-157923/config.json ...
	I1227 09:37:47.022262  647990 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/kindnet-157923/config.json: {Name:mkc9489786022a3c521e082ba47d43b09ee5c209 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:37:47.043904  647990 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon, skipping pull
	I1227 09:37:47.043922  647990 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in daemon, skipping load
	I1227 09:37:47.043937  647990 cache.go:243] Successfully downloaded all kic artifacts
	I1227 09:37:47.043970  647990 start.go:360] acquireMachinesLock for kindnet-157923: {Name:mk5cf38a4c59f5d9a1319baf127d324f7051b88d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 09:37:47.044057  647990 start.go:364] duration metric: took 73.03µs to acquireMachinesLock for "kindnet-157923"
	I1227 09:37:47.044079  647990 start.go:93] Provisioning new machine with config: &{Name:kindnet-157923 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:kindnet-157923 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1227 09:37:47.044147  647990 start.go:125] createHost starting for "" (driver="docker")
	I1227 09:37:44.467035  640477 addons.go:530] duration metric: took 525.080113ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1227 09:37:44.738293  640477 kapi.go:214] "coredns" deployment in "kube-system" namespace and "auto-157923" context rescaled to 1 replicas
	W1227 09:37:46.238781  640477 node_ready.go:57] node "auto-157923" has "Ready":"False" status (will retry)
	W1227 09:37:48.238941  640477 node_ready.go:57] node "auto-157923" has "Ready":"False" status (will retry)
	I1227 09:37:47.045517  647990 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1227 09:37:47.045710  647990 start.go:159] libmachine.API.Create for "kindnet-157923" (driver="docker")
	I1227 09:37:47.045736  647990 client.go:173] LocalClient.Create starting
	I1227 09:37:47.045836  647990 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca.pem
	I1227 09:37:47.045874  647990 main.go:144] libmachine: Decoding PEM data...
	I1227 09:37:47.045891  647990 main.go:144] libmachine: Parsing certificate...
	I1227 09:37:47.045952  647990 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22343-373581/.minikube/certs/cert.pem
	I1227 09:37:47.045975  647990 main.go:144] libmachine: Decoding PEM data...
	I1227 09:37:47.045984  647990 main.go:144] libmachine: Parsing certificate...
	I1227 09:37:47.046348  647990 cli_runner.go:164] Run: docker network inspect kindnet-157923 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1227 09:37:47.063570  647990 cli_runner.go:211] docker network inspect kindnet-157923 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1227 09:37:47.063648  647990 network_create.go:284] running [docker network inspect kindnet-157923] to gather additional debugging logs...
	I1227 09:37:47.063669  647990 cli_runner.go:164] Run: docker network inspect kindnet-157923
	W1227 09:37:47.079738  647990 cli_runner.go:211] docker network inspect kindnet-157923 returned with exit code 1
	I1227 09:37:47.079765  647990 network_create.go:287] error running [docker network inspect kindnet-157923]: docker network inspect kindnet-157923: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network kindnet-157923 not found
	I1227 09:37:47.079776  647990 network_create.go:289] output of [docker network inspect kindnet-157923]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network kindnet-157923 not found
	
	** /stderr **
	I1227 09:37:47.079880  647990 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 09:37:47.096505  647990 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-8c57ecff6d5e IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:1e:54:d8:ad:bd:73} reservation:<nil>}
	I1227 09:37:47.097278  647990 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-21a699476be6 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:fe:e8:d9:95:e6:36} reservation:<nil>}
	I1227 09:37:47.097729  647990 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-8e97c5356905 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:f2:d9:6b:42:f5:e3} reservation:<nil>}
	I1227 09:37:47.098498  647990 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001d83dd0}
	I1227 09:37:47.098520  647990 network_create.go:124] attempt to create docker network kindnet-157923 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1227 09:37:47.098594  647990 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kindnet-157923 kindnet-157923
	I1227 09:37:47.148980  647990 network_create.go:108] docker network kindnet-157923 192.168.76.0/24 created
	I1227 09:37:47.149015  647990 kic.go:121] calculated static IP "192.168.76.2" for the "kindnet-157923" container
	I1227 09:37:47.149100  647990 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1227 09:37:47.166970  647990 cli_runner.go:164] Run: docker volume create kindnet-157923 --label name.minikube.sigs.k8s.io=kindnet-157923 --label created_by.minikube.sigs.k8s.io=true
	I1227 09:37:47.185066  647990 oci.go:103] Successfully created a docker volume kindnet-157923
	I1227 09:37:47.185135  647990 cli_runner.go:164] Run: docker run --rm --name kindnet-157923-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-157923 --entrypoint /usr/bin/test -v kindnet-157923:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -d /var/lib
	I1227 09:37:47.721086  647990 oci.go:107] Successfully prepared a docker volume kindnet-157923
	I1227 09:37:47.721191  647990 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 09:37:47.721212  647990 kic.go:194] Starting extracting preloaded images to volume ...
	I1227 09:37:47.721327  647990 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22343-373581/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-157923:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -I lz4 -xf /preloaded.tar -C /extractDir
	I1227 09:37:51.492743  647990 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22343-373581/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-157923:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -I lz4 -xf /preloaded.tar -C /extractDir: (3.771349551s)
	I1227 09:37:51.492802  647990 kic.go:203] duration metric: took 3.771572095s to extract preloaded images to volume ...
	W1227 09:37:51.492907  647990 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1227 09:37:51.492986  647990 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1227 09:37:51.493040  647990 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1227 09:37:51.548737  647990 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kindnet-157923 --name kindnet-157923 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-157923 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kindnet-157923 --network kindnet-157923 --ip 192.168.76.2 --volume kindnet-157923:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a
	I1227 09:37:51.804564  647990 cli_runner.go:164] Run: docker container inspect kindnet-157923 --format={{.State.Running}}
	I1227 09:37:51.823570  647990 cli_runner.go:164] Run: docker container inspect kindnet-157923 --format={{.State.Status}}
	I1227 09:37:51.841753  647990 cli_runner.go:164] Run: docker exec kindnet-157923 stat /var/lib/dpkg/alternatives/iptables
	W1227 09:37:50.284486  640477 node_ready.go:57] node "auto-157923" has "Ready":"False" status (will retry)
	W1227 09:37:52.739006  640477 node_ready.go:57] node "auto-157923" has "Ready":"False" status (will retry)
	I1227 09:37:51.887055  647990 oci.go:144] the created container "kindnet-157923" has a running status.
	I1227 09:37:51.887089  647990 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22343-373581/.minikube/machines/kindnet-157923/id_rsa...
	I1227 09:37:51.990591  647990 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22343-373581/.minikube/machines/kindnet-157923/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1227 09:37:52.019096  647990 cli_runner.go:164] Run: docker container inspect kindnet-157923 --format={{.State.Status}}
	I1227 09:37:52.044443  647990 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1227 09:37:52.044486  647990 kic_runner.go:114] Args: [docker exec --privileged kindnet-157923 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1227 09:37:52.088186  647990 cli_runner.go:164] Run: docker container inspect kindnet-157923 --format={{.State.Status}}
	I1227 09:37:52.114053  647990 machine.go:94] provisionDockerMachine start ...
	I1227 09:37:52.114164  647990 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-157923
	I1227 09:37:52.136116  647990 main.go:144] libmachine: Using SSH client type: native
	I1227 09:37:52.136464  647990 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33483 <nil> <nil>}
	I1227 09:37:52.136486  647990 main.go:144] libmachine: About to run SSH command:
	hostname
	I1227 09:37:52.273019  647990 main.go:144] libmachine: SSH cmd err, output: <nil>: kindnet-157923
	
	I1227 09:37:52.273050  647990 ubuntu.go:182] provisioning hostname "kindnet-157923"
	I1227 09:37:52.273117  647990 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-157923
	I1227 09:37:52.292545  647990 main.go:144] libmachine: Using SSH client type: native
	I1227 09:37:52.292899  647990 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33483 <nil> <nil>}
	I1227 09:37:52.292924  647990 main.go:144] libmachine: About to run SSH command:
	sudo hostname kindnet-157923 && echo "kindnet-157923" | sudo tee /etc/hostname
	I1227 09:37:52.435204  647990 main.go:144] libmachine: SSH cmd err, output: <nil>: kindnet-157923
	
	I1227 09:37:52.435289  647990 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-157923
	I1227 09:37:52.454377  647990 main.go:144] libmachine: Using SSH client type: native
	I1227 09:37:52.454614  647990 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33483 <nil> <nil>}
	I1227 09:37:52.454641  647990 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skindnet-157923' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kindnet-157923/g' /etc/hosts;
				else 
					echo '127.0.1.1 kindnet-157923' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1227 09:37:52.585466  647990 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1227 09:37:52.585495  647990 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22343-373581/.minikube CaCertPath:/home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22343-373581/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22343-373581/.minikube}
	I1227 09:37:52.585537  647990 ubuntu.go:190] setting up certificates
	I1227 09:37:52.585558  647990 provision.go:84] configureAuth start
	I1227 09:37:52.585620  647990 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-157923
	I1227 09:37:52.606953  647990 provision.go:143] copyHostCerts
	I1227 09:37:52.607022  647990 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-373581/.minikube/ca.pem, removing ...
	I1227 09:37:52.607041  647990 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-373581/.minikube/ca.pem
	I1227 09:37:52.607124  647990 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22343-373581/.minikube/ca.pem (1082 bytes)
	I1227 09:37:52.607237  647990 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-373581/.minikube/cert.pem, removing ...
	I1227 09:37:52.607250  647990 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-373581/.minikube/cert.pem
	I1227 09:37:52.607292  647990 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22343-373581/.minikube/cert.pem (1123 bytes)
	I1227 09:37:52.607371  647990 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-373581/.minikube/key.pem, removing ...
	I1227 09:37:52.607382  647990 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-373581/.minikube/key.pem
	I1227 09:37:52.607433  647990 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22343-373581/.minikube/key.pem (1679 bytes)
	I1227 09:37:52.607502  647990 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22343-373581/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca-key.pem org=jenkins.kindnet-157923 san=[127.0.0.1 192.168.76.2 kindnet-157923 localhost minikube]
	I1227 09:37:52.773980  647990 provision.go:177] copyRemoteCerts
	I1227 09:37:52.774032  647990 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1227 09:37:52.774076  647990 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-157923
	I1227 09:37:52.794674  647990 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33483 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/kindnet-157923/id_rsa Username:docker}
	I1227 09:37:52.891600  647990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1227 09:37:52.910230  647990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1227 09:37:52.927214  647990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I1227 09:37:52.944338  647990 provision.go:87] duration metric: took 358.764637ms to configureAuth
	I1227 09:37:52.944365  647990 ubuntu.go:206] setting minikube options for container-runtime
	I1227 09:37:52.944517  647990 config.go:182] Loaded profile config "kindnet-157923": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:37:52.944628  647990 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-157923
	I1227 09:37:52.963064  647990 main.go:144] libmachine: Using SSH client type: native
	I1227 09:37:52.963415  647990 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33483 <nil> <nil>}
	I1227 09:37:52.963445  647990 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1227 09:37:53.246993  647990 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1227 09:37:53.247019  647990 machine.go:97] duration metric: took 1.13294086s to provisionDockerMachine
	I1227 09:37:53.247032  647990 client.go:176] duration metric: took 6.201286787s to LocalClient.Create
	I1227 09:37:53.247049  647990 start.go:167] duration metric: took 6.201338369s to libmachine.API.Create "kindnet-157923"
	I1227 09:37:53.247059  647990 start.go:293] postStartSetup for "kindnet-157923" (driver="docker")
	I1227 09:37:53.247070  647990 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1227 09:37:53.247143  647990 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1227 09:37:53.247196  647990 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-157923
	I1227 09:37:53.266249  647990 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33483 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/kindnet-157923/id_rsa Username:docker}
	I1227 09:37:53.361547  647990 ssh_runner.go:195] Run: cat /etc/os-release
	I1227 09:37:53.365069  647990 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1227 09:37:53.365103  647990 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1227 09:37:53.365132  647990 filesync.go:126] Scanning /home/jenkins/minikube-integration/22343-373581/.minikube/addons for local assets ...
	I1227 09:37:53.365203  647990 filesync.go:126] Scanning /home/jenkins/minikube-integration/22343-373581/.minikube/files for local assets ...
	I1227 09:37:53.365317  647990 filesync.go:149] local asset: /home/jenkins/minikube-integration/22343-373581/.minikube/files/etc/ssl/certs/3771712.pem -> 3771712.pem in /etc/ssl/certs
	I1227 09:37:53.365452  647990 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1227 09:37:53.372927  647990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/files/etc/ssl/certs/3771712.pem --> /etc/ssl/certs/3771712.pem (1708 bytes)
	I1227 09:37:53.391845  647990 start.go:296] duration metric: took 144.772765ms for postStartSetup
	I1227 09:37:53.392155  647990 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-157923
	I1227 09:37:53.411616  647990 profile.go:143] Saving config to /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/kindnet-157923/config.json ...
	I1227 09:37:53.411887  647990 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 09:37:53.411930  647990 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-157923
	I1227 09:37:53.428165  647990 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33483 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/kindnet-157923/id_rsa Username:docker}
	I1227 09:37:53.516235  647990 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1227 09:37:53.521533  647990 start.go:128] duration metric: took 6.477368164s to createHost
	I1227 09:37:53.521559  647990 start.go:83] releasing machines lock for "kindnet-157923", held for 6.477490775s
	I1227 09:37:53.521631  647990 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-157923
	I1227 09:37:53.540592  647990 ssh_runner.go:195] Run: cat /version.json
	I1227 09:37:53.540649  647990 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-157923
	I1227 09:37:53.540666  647990 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1227 09:37:53.540740  647990 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-157923
	I1227 09:37:53.560072  647990 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33483 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/kindnet-157923/id_rsa Username:docker}
	I1227 09:37:53.560339  647990 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33483 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/kindnet-157923/id_rsa Username:docker}
	I1227 09:37:53.646557  647990 ssh_runner.go:195] Run: systemctl --version
	I1227 09:37:53.703207  647990 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1227 09:37:53.740466  647990 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1227 09:37:53.746989  647990 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1227 09:37:53.747094  647990 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1227 09:37:53.777411  647990 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
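
	Editor's note: the find/mv pipeline above renames any bridge or podman CNI config to *.mk_disabled so it cannot shadow the kindnet CNI selected for this profile. A minimal Go sketch of the same effect, standard library only (illustrative: the real command also restricts to regular files and runs under sudo; the directory is the one from the log):

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
		"strings"
	)

	// disableBridgeCNI renames bridge/podman CNI configs in dir by appending
	// ".mk_disabled", mirroring the logged find -exec mv pipeline.
	func disableBridgeCNI(dir string) ([]string, error) {
		var disabled []string
		for _, pat := range []string{"*bridge*", "*podman*"} {
			matches, err := filepath.Glob(filepath.Join(dir, pat))
			if err != nil {
				return nil, err
			}
			for _, m := range matches {
				if strings.HasSuffix(m, ".mk_disabled") {
					continue // already disabled on a previous run
				}
				if err := os.Rename(m, m+".mk_disabled"); err != nil {
					return nil, err
				}
				disabled = append(disabled, m)
			}
		}
		return disabled, nil
	}

	func main() {
		disabled, err := disableBridgeCNI("/etc/cni/net.d")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("disabled:", disabled)
	}
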
	I1227 09:37:53.777440  647990 start.go:496] detecting cgroup driver to use...
	I1227 09:37:53.777469  647990 detect.go:190] detected "systemd" cgroup driver on host os
	I1227 09:37:53.777516  647990 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1227 09:37:53.798340  647990 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 09:37:53.810539  647990 docker.go:218] disabling cri-docker service (if available) ...
	I1227 09:37:53.810603  647990 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1227 09:37:53.826933  647990 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1227 09:37:53.843174  647990 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1227 09:37:53.943482  647990 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1227 09:37:54.039822  647990 docker.go:234] disabling docker service ...
	I1227 09:37:54.039894  647990 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1227 09:37:54.059526  647990 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1227 09:37:54.073514  647990 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1227 09:37:54.160971  647990 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1227 09:37:54.241500  647990 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1227 09:37:54.254824  647990 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 09:37:54.269456  647990 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1227 09:37:54.269503  647990 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:37:54.285951  647990 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1227 09:37:54.286035  647990 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:37:54.297566  647990 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:37:54.308615  647990 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:37:54.318262  647990 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1227 09:37:54.326619  647990 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:37:54.335391  647990 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:37:54.348955  647990 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
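
	Editor's note: taken together, the sed edits above (pause image, cgroup manager, conmon cgroup, unprivileged-port sysctl) would leave 02-crio.conf with a fragment along these lines. This is a reconstruction for readability, assuming the stock kicbase drop-in layout; it is not a capture from the run:

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"

	[crio.runtime]
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]
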
	I1227 09:37:54.358248  647990 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1227 09:37:54.366258  647990 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1227 09:37:54.374340  647990 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 09:37:54.463096  647990 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1227 09:37:54.617694  647990 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1227 09:37:54.617767  647990 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1227 09:37:54.621607  647990 start.go:574] Will wait 60s for crictl version
	I1227 09:37:54.621659  647990 ssh_runner.go:195] Run: which crictl
	I1227 09:37:54.625375  647990 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1227 09:37:54.650450  647990 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1227 09:37:54.650531  647990 ssh_runner.go:195] Run: crio --version
	I1227 09:37:54.685299  647990 ssh_runner.go:195] Run: crio --version
	I1227 09:37:54.721668  647990 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.34.3 ...
	I1227 09:37:54.722882  647990 cli_runner.go:164] Run: docker network inspect kindnet-157923 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 09:37:54.743659  647990 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1227 09:37:54.748134  647990 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
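
	Editor's note: the bash one-liner above updates /etc/hosts idempotently: drop any stale host.minikube.internal line, append the fresh mapping, then copy the result back via sudo. A minimal Go sketch of the same pattern, standard library only (it writes the file in place instead of staging under /tmp):

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// upsertHost removes any line ending in "\t<name>" and appends "<ip>\t<name>",
	// mirroring the grep -v / echo / cp sequence in the log.
	func upsertHost(path, ip, name string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		lines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")
		kept := lines[:0]
		for _, line := range lines {
			if !strings.HasSuffix(line, "\t"+name) {
				kept = append(kept, line)
			}
		}
		kept = append(kept, ip+"\t"+name)
		return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
	}

	func main() {
		// Demo against a scratch file rather than the real /etc/hosts.
		tmp, err := os.CreateTemp("", "hosts")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		tmp.WriteString("127.0.0.1\tlocalhost\n192.168.0.9\thost.minikube.internal\n")
		tmp.Close()
		if err := upsertHost(tmp.Name(), "192.168.76.1", "host.minikube.internal"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		out, _ := os.ReadFile(tmp.Name())
		fmt.Print(string(out))
	}
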
	I1227 09:37:54.759743  647990 kubeadm.go:884] updating cluster {Name:kindnet-157923 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:kindnet-157923 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1227 09:37:54.759929  647990 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 09:37:54.759989  647990 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 09:37:54.800025  647990 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 09:37:54.800045  647990 crio.go:433] Images already preloaded, skipping extraction
	I1227 09:37:54.800087  647990 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 09:37:54.828967  647990 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 09:37:54.828993  647990 cache_images.go:86] Images are preloaded, skipping loading
	I1227 09:37:54.829002  647990 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0 crio true true} ...
	I1227 09:37:54.829112  647990 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=kindnet-157923 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:kindnet-157923 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet}
	I1227 09:37:54.829214  647990 ssh_runner.go:195] Run: crio config
	I1227 09:37:54.887057  647990 cni.go:84] Creating CNI manager for "kindnet"
	I1227 09:37:54.887091  647990 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1227 09:37:54.887123  647990 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kindnet-157923 NodeName:kindnet-157923 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1227 09:37:54.887285  647990 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kindnet-157923"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1227 09:37:54.887345  647990 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1227 09:37:54.896116  647990 binaries.go:51] Found k8s binaries, skipping transfer
	I1227 09:37:54.896181  647990 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1227 09:37:54.905443  647990 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (364 bytes)
	I1227 09:37:54.920554  647990 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1227 09:37:54.936273  647990 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
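
	Editor's note: the kubeadm config rendered above is shipped to /var/tmp/minikube/kubeadm.yaml.new as a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A sketch of walking such a stream and surfacing the kubelet's cgroupDriver, which must agree with CRI-O's cgroup_manager configured earlier; this assumes gopkg.in/yaml.v3 is available and is not part of minikube itself:

	package main

	import (
		"fmt"
		"io"
		"os"

		"gopkg.in/yaml.v3"
	)

	func main() {
		f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new") // path from the log
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		defer f.Close()
		dec := yaml.NewDecoder(f)
		for {
			var doc map[string]any
			if err := dec.Decode(&doc); err == io.EOF {
				break // end of the multi-document stream
			} else if err != nil {
				fmt.Fprintln(os.Stderr, err)
				os.Exit(1)
			}
			fmt.Println("kind:", doc["kind"])
			if doc["kind"] == "KubeletConfiguration" {
				fmt.Println("  cgroupDriver:", doc["cgroupDriver"])
			}
		}
	}
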
	I1227 09:37:54.950650  647990 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1227 09:37:54.954244  647990 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 09:37:54.964663  647990 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 09:37:55.065441  647990 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 09:37:55.087499  647990 certs.go:69] Setting up /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/kindnet-157923 for IP: 192.168.76.2
	I1227 09:37:55.087519  647990 certs.go:195] generating shared ca certs ...
	I1227 09:37:55.087537  647990 certs.go:227] acquiring lock for ca certs: {Name:mkab52f7e7387ca5a3a8b54d27e85c3698ea95d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:37:55.087707  647990 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22343-373581/.minikube/ca.key
	I1227 09:37:55.087781  647990 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22343-373581/.minikube/proxy-client-ca.key
	I1227 09:37:55.087806  647990 certs.go:257] generating profile certs ...
	I1227 09:37:55.087889  647990 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/kindnet-157923/client.key
	I1227 09:37:55.087916  647990 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/kindnet-157923/client.crt with IP's: []
	I1227 09:37:55.206241  647990 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/kindnet-157923/client.crt ...
	I1227 09:37:55.206263  647990 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/kindnet-157923/client.crt: {Name:mk4b27371e040b28c42b03f162405c1098913b47 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:37:55.206398  647990 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/kindnet-157923/client.key ...
	I1227 09:37:55.206409  647990 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/kindnet-157923/client.key: {Name:mkf82ad75ce77023e0b0b356b51bb304b0e5c28f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:37:55.206494  647990 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/kindnet-157923/apiserver.key.569afa2f
	I1227 09:37:55.206507  647990 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/kindnet-157923/apiserver.crt.569afa2f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1227 09:37:55.370250  647990 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/kindnet-157923/apiserver.crt.569afa2f ...
	I1227 09:37:55.370276  647990 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/kindnet-157923/apiserver.crt.569afa2f: {Name:mkab6a3a74ba9d3018d466bde2b041326b2d56ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:37:55.370456  647990 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/kindnet-157923/apiserver.key.569afa2f ...
	I1227 09:37:55.370480  647990 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/kindnet-157923/apiserver.key.569afa2f: {Name:mk3d9f655a6c22cab3f504a60443de00f5b6d230 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:37:55.370596  647990 certs.go:382] copying /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/kindnet-157923/apiserver.crt.569afa2f -> /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/kindnet-157923/apiserver.crt
	I1227 09:37:55.370716  647990 certs.go:386] copying /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/kindnet-157923/apiserver.key.569afa2f -> /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/kindnet-157923/apiserver.key
	I1227 09:37:55.370834  647990 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/kindnet-157923/proxy-client.key
	I1227 09:37:55.370857  647990 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/kindnet-157923/proxy-client.crt with IP's: []
	I1227 09:37:55.404908  647990 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/kindnet-157923/proxy-client.crt ...
	I1227 09:37:55.404936  647990 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/kindnet-157923/proxy-client.crt: {Name:mk1c886e2c1537ccc3cb9b9a86fad437a4209670 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:37:55.405118  647990 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/kindnet-157923/proxy-client.key ...
	I1227 09:37:55.405145  647990 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/kindnet-157923/proxy-client.key: {Name:mk583afa8babce870fb3b7c9a396740ac2bba309 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:37:55.405410  647990 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/377171.pem (1338 bytes)
	W1227 09:37:55.405465  647990 certs.go:480] ignoring /home/jenkins/minikube-integration/22343-373581/.minikube/certs/377171_empty.pem, impossibly tiny 0 bytes
	I1227 09:37:55.405481  647990 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca-key.pem (1679 bytes)
	I1227 09:37:55.405516  647990 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca.pem (1082 bytes)
	I1227 09:37:55.405552  647990 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/cert.pem (1123 bytes)
	I1227 09:37:55.405592  647990 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/key.pem (1679 bytes)
	I1227 09:37:55.405660  647990 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-373581/.minikube/files/etc/ssl/certs/3771712.pem (1708 bytes)
	I1227 09:37:55.406283  647990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1227 09:37:55.427179  647990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1227 09:37:55.447038  647990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1227 09:37:55.466089  647990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1227 09:37:55.486256  647990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/kindnet-157923/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1227 09:37:55.505373  647990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/kindnet-157923/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1227 09:37:55.524526  647990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/kindnet-157923/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1227 09:37:55.544635  647990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/kindnet-157923/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1227 09:37:55.565595  647990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1227 09:37:55.588733  647990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/certs/377171.pem --> /usr/share/ca-certificates/377171.pem (1338 bytes)
	I1227 09:37:55.608351  647990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/files/etc/ssl/certs/3771712.pem --> /usr/share/ca-certificates/3771712.pem (1708 bytes)
	I1227 09:37:55.628017  647990 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1227 09:37:55.642193  647990 ssh_runner.go:195] Run: openssl version
	I1227 09:37:55.649648  647990 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:37:55.659190  647990 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1227 09:37:55.668559  647990 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:37:55.672721  647990 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 09:06 /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:37:55.672775  647990 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:37:55.713639  647990 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1227 09:37:55.722406  647990 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1227 09:37:55.730264  647990 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/377171.pem
	I1227 09:37:55.738297  647990 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/377171.pem /etc/ssl/certs/377171.pem
	I1227 09:37:55.746031  647990 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/377171.pem
	I1227 09:37:55.749594  647990 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 27 09:09 /usr/share/ca-certificates/377171.pem
	I1227 09:37:55.749644  647990 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/377171.pem
	I1227 09:37:55.789889  647990 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1227 09:37:55.798174  647990 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/377171.pem /etc/ssl/certs/51391683.0
	I1227 09:37:55.807278  647990 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3771712.pem
	I1227 09:37:55.816048  647990 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3771712.pem /etc/ssl/certs/3771712.pem
	I1227 09:37:55.824201  647990 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3771712.pem
	I1227 09:37:55.832745  647990 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 27 09:09 /usr/share/ca-certificates/3771712.pem
	I1227 09:37:55.832838  647990 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3771712.pem
	I1227 09:37:55.878457  647990 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1227 09:37:55.887849  647990 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/3771712.pem /etc/ssl/certs/3ec20f2e.0
	I1227 09:37:55.897027  647990 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1227 09:37:55.901326  647990 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
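
	Editor's note: minikube probes for this cert and treats a not-exist error as "likely first start" rather than a failure, as the certs.go:400 line above shows. A stdlib Go sketch of that check (the real probe runs stat over SSH; here it is local):

	package main

	import (
		"errors"
		"fmt"
		"io/fs"
		"os"
	)

	// firstStart reports whether the apiserver-kubelet-client cert is absent.
	// Any error other than not-exist is surfaced rather than swallowed.
	func firstStart(certPath string) (bool, error) {
		_, err := os.Stat(certPath)
		if errors.Is(err, fs.ErrNotExist) {
			return true, nil
		}
		return false, err
	}

	func main() {
		first, err := firstStart("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("first start:", first)
	}
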
	I1227 09:37:55.901404  647990 kubeadm.go:401] StartCluster: {Name:kindnet-157923 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:kindnet-157923 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 09:37:55.901490  647990 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 09:37:55.901543  647990 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 09:37:55.935869  647990 cri.go:96] found id: ""
	I1227 09:37:55.935939  647990 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1227 09:37:55.944874  647990 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1227 09:37:55.953265  647990 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1227 09:37:55.953400  647990 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1227 09:37:55.961985  647990 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1227 09:37:55.962004  647990 kubeadm.go:158] found existing configuration files:
	
	I1227 09:37:55.962056  647990 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1227 09:37:55.971658  647990 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1227 09:37:55.971720  647990 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1227 09:37:55.981187  647990 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1227 09:37:55.990532  647990 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1227 09:37:55.990606  647990 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1227 09:37:56.000915  647990 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1227 09:37:56.010527  647990 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1227 09:37:56.010583  647990 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1227 09:37:56.021424  647990 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1227 09:37:56.030746  647990 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1227 09:37:56.030839  647990 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1227 09:37:56.038200  647990 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1227 09:37:56.081534  647990 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I1227 09:37:56.081592  647990 kubeadm.go:319] [preflight] Running pre-flight checks
	I1227 09:37:56.161036  647990 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1227 09:37:56.161132  647990 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1227 09:37:56.161178  647990 kubeadm.go:319] OS: Linux
	I1227 09:37:56.161237  647990 kubeadm.go:319] CGROUPS_CPU: enabled
	I1227 09:37:56.161297  647990 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1227 09:37:56.161362  647990 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1227 09:37:56.161422  647990 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1227 09:37:56.161483  647990 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1227 09:37:56.161689  647990 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1227 09:37:56.161768  647990 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1227 09:37:56.161883  647990 kubeadm.go:319] CGROUPS_IO: enabled
	I1227 09:37:56.224744  647990 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1227 09:37:56.224938  647990 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1227 09:37:56.225063  647990 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1227 09:37:56.232440  647990 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	
	
	==> CRI-O <==
	Dec 27 09:37:18 no-preload-963457 crio[561]: time="2025-12-27T09:37:18.883350504Z" level=info msg="Started container" PID=1731 containerID=376c6336427a2b0af7525abe2d6342dabd3af6c951a46105f82ccfcb95200855 description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-qj7z2/dashboard-metrics-scraper id=3b442bef-5843-46d9-aa14-36b3ed51e206 name=/runtime.v1.RuntimeService/StartContainer sandboxID=5531d82b938e4bbdce99428fb11b73138b7b7ef0e0991bc8ba4569f8d44ba308
	Dec 27 09:37:19 no-preload-963457 crio[561]: time="2025-12-27T09:37:19.945300233Z" level=info msg="Removing container: 6abd451748c9f46e280949ff15c32d7be9f0710b8cb9dacb9c97ed3250d82605" id=288d8cac-92eb-4022-8636-1e4519f70448 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 27 09:37:19 no-preload-963457 crio[561]: time="2025-12-27T09:37:19.956017329Z" level=info msg="Removed container 6abd451748c9f46e280949ff15c32d7be9f0710b8cb9dacb9c97ed3250d82605: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-qj7z2/dashboard-metrics-scraper" id=288d8cac-92eb-4022-8636-1e4519f70448 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 27 09:37:30 no-preload-963457 crio[561]: time="2025-12-27T09:37:30.97348749Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=1b6422ad-8a72-4f2e-a54b-b3edd41fb1a7 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 09:37:30 no-preload-963457 crio[561]: time="2025-12-27T09:37:30.974423517Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=b27886af-970f-4c94-9393-0bc712c44bce name=/runtime.v1.ImageService/ImageStatus
	Dec 27 09:37:30 no-preload-963457 crio[561]: time="2025-12-27T09:37:30.975479376Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=bf58aa41-fd05-4d3b-b56e-423a207f890c name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 09:37:30 no-preload-963457 crio[561]: time="2025-12-27T09:37:30.975622171Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 09:37:30 no-preload-963457 crio[561]: time="2025-12-27T09:37:30.980688395Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 09:37:30 no-preload-963457 crio[561]: time="2025-12-27T09:37:30.980941496Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/5790fade8fbcd4a49b64d2a18c566f337f34830cafd6d6cfa9012335083d4d6b/merged/etc/passwd: no such file or directory"
	Dec 27 09:37:30 no-preload-963457 crio[561]: time="2025-12-27T09:37:30.980978355Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/5790fade8fbcd4a49b64d2a18c566f337f34830cafd6d6cfa9012335083d4d6b/merged/etc/group: no such file or directory"
	Dec 27 09:37:30 no-preload-963457 crio[561]: time="2025-12-27T09:37:30.981836292Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 09:37:31 no-preload-963457 crio[561]: time="2025-12-27T09:37:31.006712205Z" level=info msg="Created container 65af506e89098a5c5b4ffc8cff2878629f11900d58c4ff5309b81a9befadadd5: kube-system/storage-provisioner/storage-provisioner" id=bf58aa41-fd05-4d3b-b56e-423a207f890c name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 09:37:31 no-preload-963457 crio[561]: time="2025-12-27T09:37:31.007389521Z" level=info msg="Starting container: 65af506e89098a5c5b4ffc8cff2878629f11900d58c4ff5309b81a9befadadd5" id=b8e1370e-a61b-4adf-97b4-da9c966d944f name=/runtime.v1.RuntimeService/StartContainer
	Dec 27 09:37:31 no-preload-963457 crio[561]: time="2025-12-27T09:37:31.00958705Z" level=info msg="Started container" PID=1749 containerID=65af506e89098a5c5b4ffc8cff2878629f11900d58c4ff5309b81a9befadadd5 description=kube-system/storage-provisioner/storage-provisioner id=b8e1370e-a61b-4adf-97b4-da9c966d944f name=/runtime.v1.RuntimeService/StartContainer sandboxID=843b9ed615caba93a86db4ecf531f2b6c3207b997d5a776c5c8d88f51e0a2284
	Dec 27 09:37:41 no-preload-963457 crio[561]: time="2025-12-27T09:37:41.839462007Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=1b735ef1-b599-4c19-b8f1-5cb973703b99 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 09:37:41 no-preload-963457 crio[561]: time="2025-12-27T09:37:41.840374829Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=b1de8586-431f-405f-a0f4-7633f7182ffd name=/runtime.v1.ImageService/ImageStatus
	Dec 27 09:37:41 no-preload-963457 crio[561]: time="2025-12-27T09:37:41.841458793Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-qj7z2/dashboard-metrics-scraper" id=7a41a8a6-4e21-41b7-887e-a65a065240ba name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 09:37:41 no-preload-963457 crio[561]: time="2025-12-27T09:37:41.841611144Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 09:37:41 no-preload-963457 crio[561]: time="2025-12-27T09:37:41.847594353Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 09:37:41 no-preload-963457 crio[561]: time="2025-12-27T09:37:41.848098641Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 09:37:41 no-preload-963457 crio[561]: time="2025-12-27T09:37:41.878056651Z" level=info msg="Created container 32f303997e3c2793c1602c91acb4c248bc7aa4fb7fcd92f892f04f2ea3647d50: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-qj7z2/dashboard-metrics-scraper" id=7a41a8a6-4e21-41b7-887e-a65a065240ba name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 09:37:41 no-preload-963457 crio[561]: time="2025-12-27T09:37:41.879158967Z" level=info msg="Starting container: 32f303997e3c2793c1602c91acb4c248bc7aa4fb7fcd92f892f04f2ea3647d50" id=abf41846-1d47-46c0-a1f8-6cbb0a240014 name=/runtime.v1.RuntimeService/StartContainer
	Dec 27 09:37:41 no-preload-963457 crio[561]: time="2025-12-27T09:37:41.882360686Z" level=info msg="Started container" PID=1789 containerID=32f303997e3c2793c1602c91acb4c248bc7aa4fb7fcd92f892f04f2ea3647d50 description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-qj7z2/dashboard-metrics-scraper id=abf41846-1d47-46c0-a1f8-6cbb0a240014 name=/runtime.v1.RuntimeService/StartContainer sandboxID=5531d82b938e4bbdce99428fb11b73138b7b7ef0e0991bc8ba4569f8d44ba308
	Dec 27 09:37:42 no-preload-963457 crio[561]: time="2025-12-27T09:37:42.009727423Z" level=info msg="Removing container: 376c6336427a2b0af7525abe2d6342dabd3af6c951a46105f82ccfcb95200855" id=e1af7286-3667-4bea-9eaa-72be4610f17d name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 27 09:37:42 no-preload-963457 crio[561]: time="2025-12-27T09:37:42.019203402Z" level=info msg="Removed container 376c6336427a2b0af7525abe2d6342dabd3af6c951a46105f82ccfcb95200855: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-qj7z2/dashboard-metrics-scraper" id=e1af7286-3667-4bea-9eaa-72be4610f17d name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	32f303997e3c2       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           15 seconds ago      Exited              dashboard-metrics-scraper   3                   5531d82b938e4       dashboard-metrics-scraper-867fb5f87b-qj7z2   kubernetes-dashboard
	65af506e89098       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           26 seconds ago      Running             storage-provisioner         1                   843b9ed615cab       storage-provisioner                          kube-system
	86f0772760e7d       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   45 seconds ago      Running             kubernetes-dashboard        0                   a8768ad8791de       kubernetes-dashboard-b84665fb8-hlxhq         kubernetes-dashboard
	bd093986aac0e       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           56 seconds ago      Running             busybox                     1                   6c3aa73e31760       busybox                                      default
	14a1ce99783ca       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                           56 seconds ago      Running             coredns                     0                   25be004bfa0c9       coredns-7d764666f9-wnzhx                     kube-system
	277a270b3acbc       32652ff1bbe6b6df16a3dc621fcad4e6e185a2852bffc5847aab79e36c54bfb8                                           56 seconds ago      Running             kube-proxy                  0                   dfab418fb46c1       kube-proxy-grkqs                             kube-system
	eb5602ad87854       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           56 seconds ago      Exited              storage-provisioner         0                   843b9ed615cab       storage-provisioner                          kube-system
	90f6040bd590b       4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251                                           56 seconds ago      Running             kindnet-cni                 0                   853e671abc275       kindnet-7kw8b                                kube-system
	3e2d56e4ec07d       5c6acd67e9cd10eb60a246cd233db251ef62ea97e6572f897e873f0cb648f499                                           58 seconds ago      Running             kube-apiserver              0                   5ba47ece9636d       kube-apiserver-no-preload-963457             kube-system
	0edde0ef00356       550794e3b12ac21ec7fd940bdfb45f7c8dae3c52acd8c70e580b882de20c3dcc                                           58 seconds ago      Running             kube-scheduler              0                   d30d42f94dcae       kube-scheduler-no-preload-963457             kube-system
	03b54f84cfa7e       2c9a4b058bd7e6ec479c38f9e8a1dac2f5ee5b0a3ebda6dfac968e8720229508                                           58 seconds ago      Running             kube-controller-manager     0                   1d3bd7cfc1412       kube-controller-manager-no-preload-963457    kube-system
	716a7952d1fa9       0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2                                           58 seconds ago      Running             etcd                        0                   e51ec478886f5       etcd-no-preload-963457                       kube-system
	
	
	==> coredns [14a1ce99783ca90e3f5c758ff748c1c1f76f41c8636b97b01bcf78d741646fd6] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:59521 - 40485 "HINFO IN 4872048296174984903.8139166939014044202. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.024238943s
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	
	
	==> describe nodes <==
	Name:               no-preload-963457
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-963457
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6d12708197eb5984f26340287ced0d25367967e8
	                    minikube.k8s.io/name=no-preload-963457
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_27T09_35_58_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 27 Dec 2025 09:35:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-963457
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 27 Dec 2025 09:37:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 27 Dec 2025 09:37:51 +0000   Sat, 27 Dec 2025 09:35:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 27 Dec 2025 09:37:51 +0000   Sat, 27 Dec 2025 09:35:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 27 Dec 2025 09:37:51 +0000   Sat, 27 Dec 2025 09:35:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 27 Dec 2025 09:37:51 +0000   Sat, 27 Dec 2025 09:36:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-963457
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 6a05ebbd9dbde47388fc365d694bbbfd
	  System UUID:                d821149c-44f6-4337-913a-683907f0e23a
	  Boot ID:                    38891013-55ac-4a66-aef6-0b0711ecc60c
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         98s
	  kube-system                 coredns-7d764666f9-wnzhx                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     114s
	  kube-system                 etcd-no-preload-963457                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m
	  kube-system                 kindnet-7kw8b                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      114s
	  kube-system                 kube-apiserver-no-preload-963457              250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 kube-controller-manager-no-preload-963457     200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 kube-proxy-grkqs                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 kube-scheduler-no-preload-963457              100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         114s
	  kubernetes-dashboard        dashboard-metrics-scraper-867fb5f87b-qj7z2    0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-hlxhq          0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  116s  node-controller  Node no-preload-963457 event: Registered Node no-preload-963457 in Controller
	  Normal  RegisteredNode  55s   node-controller  Node no-preload-963457 event: Registered Node no-preload-963457 in Controller
	
	
	==> dmesg <==
	[Dec27 09:02] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 3a ce 76 ea 00 16 08 06
	[ +15.308804] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 5e ed 76 f4 5a 59 08 06
	[  +0.000321] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 3a ce 76 ea 00 16 08 06
	[  +1.257710] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 62 30 8d 6e b6 d3 08 06
	[Dec27 09:03] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff de f8 3b c4 47 35 08 06
	[  +0.002052] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff de 0f d7 5a 4b 84 08 06
	[ +15.582399] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff f6 ce 0c 6c 86 d2 08 06
	[  +0.000307] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 62 30 8d 6e b6 d3 08 06
	[  +9.916919] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 5e cc 0c a0 d8 19 08 06
	[  +0.000360] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 06 01 72 f8 79 43 08 06
	[ +21.695394] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 66 bb be 81 50 ce 08 06
	[  +0.000381] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff de 0f d7 5a 4b 84 08 06
	
	
	==> etcd [716a7952d1fa9945a526436df75297cbf883fb889ba62f53b3ae1e94790bfeaa] <==
	{"level":"info","ts":"2025-12-27T09:36:58.407046Z","caller":"membership/cluster.go:433","msg":"ignore already added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2025-12-27T09:36:58.407090Z","caller":"embed/etcd.go:640","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-12-27T09:36:58.407105Z","caller":"embed/etcd.go:611","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-12-27T09:36:58.407137Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","from":"3.6","to":"3.6"}
	{"level":"info","ts":"2025-12-27T09:36:58.695608Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"9f0758e1c58a86ed is starting a new election at term 2"}
	{"level":"info","ts":"2025-12-27T09:36:58.695653Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"9f0758e1c58a86ed became pre-candidate at term 2"}
	{"level":"info","ts":"2025-12-27T09:36:58.695715Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-12-27T09:36:58.695727Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"9f0758e1c58a86ed has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-27T09:36:58.695741Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"9f0758e1c58a86ed became candidate at term 3"}
	{"level":"info","ts":"2025-12-27T09:36:58.696308Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-12-27T09:36:58.696355Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"9f0758e1c58a86ed has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-27T09:36:58.696378Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"9f0758e1c58a86ed became leader at term 3"}
	{"level":"info","ts":"2025-12-27T09:36:58.696388Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-12-27T09:36:58.697027Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:no-preload-963457 ClientURLs:[https://192.168.85.2:2379]}","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-27T09:36:58.697068Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T09:36:58.697040Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T09:36:58.697246Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-27T09:36:58.697295Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-27T09:36:58.698448Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-27T09:36:58.698513Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-27T09:36:58.701446Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-12-27T09:36:58.702449Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-27T09:37:50.589404Z","caller":"traceutil/trace.go:172","msg":"trace[463382398] transaction","detail":"{read_only:false; response_revision:697; number_of_response:1; }","duration":"148.322536ms","start":"2025-12-27T09:37:50.441006Z","end":"2025-12-27T09:37:50.589329Z","steps":["trace[463382398] 'process raft request'  (duration: 148.191312ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-27T09:37:51.257511Z","caller":"traceutil/trace.go:172","msg":"trace[642364407] transaction","detail":"{read_only:false; response_revision:700; number_of_response:1; }","duration":"123.08666ms","start":"2025-12-27T09:37:51.134408Z","end":"2025-12-27T09:37:51.257495Z","steps":["trace[642364407] 'process raft request'  (duration: 119.449026ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-27T09:37:51.335047Z","caller":"traceutil/trace.go:172","msg":"trace[152291325] transaction","detail":"{read_only:false; response_revision:701; number_of_response:1; }","duration":"172.565543ms","start":"2025-12-27T09:37:51.162465Z","end":"2025-12-27T09:37:51.335031Z","steps":["trace[152291325] 'process raft request'  (duration: 172.47348ms)"],"step_count":1}
	
	
	==> kernel <==
	 09:37:57 up  1:20,  0 user,  load average: 3.07, 3.08, 2.37
	Linux no-preload-963457 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [90f6040bd590bec7f0928127c1409b1a37c5c1c010abbf933230ee623c4fceca] <==
	I1227 09:37:00.498875       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1227 09:37:00.499137       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1227 09:37:00.499329       1 main.go:148] setting mtu 1500 for CNI 
	I1227 09:37:00.499358       1 main.go:178] kindnetd IP family: "ipv4"
	I1227 09:37:00.499385       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-27T09:37:00Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1227 09:37:00.701034       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1227 09:37:00.701066       1 controller.go:381] "Waiting for informer caches to sync"
	I1227 09:37:00.701079       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1227 09:37:00.701538       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1227 09:37:01.201656       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1227 09:37:01.201688       1 metrics.go:72] Registering metrics
	I1227 09:37:01.201730       1 controller.go:711] "Syncing nftables rules"
	I1227 09:37:10.701735       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1227 09:37:10.701822       1 main.go:301] handling current node
	I1227 09:37:20.701717       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1227 09:37:20.701752       1 main.go:301] handling current node
	I1227 09:37:30.701288       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1227 09:37:30.701335       1 main.go:301] handling current node
	I1227 09:37:40.701912       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1227 09:37:40.701954       1 main.go:301] handling current node
	I1227 09:37:50.706257       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1227 09:37:50.706292       1 main.go:301] handling current node
	
	
	==> kube-apiserver [3e2d56e4ec07d845c73c075df87007dd294313f8f25d93e2e062eae21343461c] <==
	I1227 09:36:59.812090       1 autoregister_controller.go:144] Starting autoregister controller
	I1227 09:36:59.812097       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1227 09:36:59.812103       1 cache.go:39] Caches are synced for autoregister controller
	I1227 09:36:59.812204       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1227 09:36:59.812217       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1227 09:36:59.812363       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1227 09:36:59.813186       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1227 09:36:59.813315       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1227 09:36:59.820940       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1227 09:36:59.828846       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1227 09:36:59.829648       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1227 09:36:59.838572       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	E1227 09:36:59.844758       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1227 09:36:59.880380       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1227 09:36:59.932597       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1227 09:37:00.238449       1 controller.go:667] quota admission added evaluator for: namespaces
	I1227 09:37:00.291099       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1227 09:37:00.310651       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1227 09:37:00.318367       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1227 09:37:00.361663       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.105.170.64"}
	I1227 09:37:00.371947       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.98.146.192"}
	I1227 09:37:00.716096       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1227 09:37:03.387192       1 controller.go:667] quota admission added evaluator for: endpoints
	I1227 09:37:03.543572       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1227 09:37:03.592299       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [03b54f84cfa7e0b506b3122cd323cd0db22c3c4310cfedd9769eeb770ec9a426] <==
	I1227 09:37:02.939621       1 shared_informer.go:377] "Caches are synced"
	I1227 09:37:02.939622       1 shared_informer.go:377] "Caches are synced"
	I1227 09:37:02.939630       1 shared_informer.go:377] "Caches are synced"
	I1227 09:37:02.941405       1 shared_informer.go:377] "Caches are synced"
	I1227 09:37:02.941406       1 shared_informer.go:377] "Caches are synced"
	I1227 09:37:02.941432       1 shared_informer.go:377] "Caches are synced"
	I1227 09:37:02.941444       1 shared_informer.go:377] "Caches are synced"
	I1227 09:37:02.941460       1 shared_informer.go:377] "Caches are synced"
	I1227 09:37:02.941470       1 range_allocator.go:177] "Sending events to api server"
	I1227 09:37:02.941505       1 range_allocator.go:181] "Starting range CIDR allocator"
	I1227 09:37:02.941509       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 09:37:02.941512       1 shared_informer.go:377] "Caches are synced"
	I1227 09:37:02.941521       1 shared_informer.go:377] "Caches are synced"
	I1227 09:37:02.941534       1 shared_informer.go:377] "Caches are synced"
	I1227 09:37:02.941574       1 shared_informer.go:377] "Caches are synced"
	I1227 09:37:02.941603       1 shared_informer.go:377] "Caches are synced"
	I1227 09:37:02.941638       1 shared_informer.go:377] "Caches are synced"
	I1227 09:37:02.941658       1 shared_informer.go:377] "Caches are synced"
	I1227 09:37:02.941695       1 shared_informer.go:377] "Caches are synced"
	I1227 09:37:02.949360       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 09:37:02.949982       1 shared_informer.go:377] "Caches are synced"
	I1227 09:37:03.044972       1 shared_informer.go:377] "Caches are synced"
	I1227 09:37:03.044991       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1227 09:37:03.044996       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1227 09:37:03.050372       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [277a270b3acbca9938619dd72bc982e6116059d92f6771afd9e2f497d7dd77b4] <==
	I1227 09:37:00.299000       1 server_linux.go:53] "Using iptables proxy"
	I1227 09:37:00.368169       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 09:37:00.469062       1 shared_informer.go:377] "Caches are synced"
	I1227 09:37:00.469092       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1227 09:37:00.469623       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1227 09:37:00.495891       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1227 09:37:00.495951       1 server_linux.go:136] "Using iptables Proxier"
	I1227 09:37:00.502697       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1227 09:37:00.503127       1 server.go:529] "Version info" version="v1.35.0"
	I1227 09:37:00.503159       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 09:37:00.504465       1 config.go:200] "Starting service config controller"
	I1227 09:37:00.504533       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1227 09:37:00.504587       1 config.go:106] "Starting endpoint slice config controller"
	I1227 09:37:00.504612       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1227 09:37:00.504652       1 config.go:403] "Starting serviceCIDR config controller"
	I1227 09:37:00.504685       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1227 09:37:00.505177       1 config.go:309] "Starting node config controller"
	I1227 09:37:00.505200       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1227 09:37:00.505209       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1227 09:37:00.605174       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1227 09:37:00.605211       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1227 09:37:00.605210       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [0edde0ef003566fce4556ceb0a3d7cbc04d2cd6685f5afd803595f3ababb1338] <==
	I1227 09:36:58.632312       1 serving.go:386] Generated self-signed cert in-memory
	W1227 09:36:59.732558       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1227 09:36:59.732595       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1227 09:36:59.732605       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1227 09:36:59.732615       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1227 09:36:59.788632       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0"
	I1227 09:36:59.788685       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 09:36:59.794043       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1227 09:36:59.794589       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1227 09:36:59.799902       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 09:36:59.795642       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1227 09:36:59.900403       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 27 09:37:18 no-preload-963457 kubelet[714]: E1227 09:37:18.939553     714 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-qj7z2" containerName="dashboard-metrics-scraper"
	Dec 27 09:37:19 no-preload-963457 kubelet[714]: I1227 09:37:19.020121     714 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-qj7z2" podStartSLOduration=1.211529696 podStartE2EDuration="16.02010183s" podCreationTimestamp="2025-12-27 09:37:03 +0000 UTC" firstStartedPulling="2025-12-27 09:37:04.032342742 +0000 UTC m=+6.287426508" lastFinishedPulling="2025-12-27 09:37:18.840914887 +0000 UTC m=+21.095998642" observedRunningTime="2025-12-27 09:37:19.019964909 +0000 UTC m=+21.275048685" watchObservedRunningTime="2025-12-27 09:37:19.02010183 +0000 UTC m=+21.275185608"
	Dec 27 09:37:19 no-preload-963457 kubelet[714]: I1227 09:37:19.943948     714 scope.go:122] "RemoveContainer" containerID="6abd451748c9f46e280949ff15c32d7be9f0710b8cb9dacb9c97ed3250d82605"
	Dec 27 09:37:19 no-preload-963457 kubelet[714]: E1227 09:37:19.944112     714 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-qj7z2" containerName="dashboard-metrics-scraper"
	Dec 27 09:37:19 no-preload-963457 kubelet[714]: I1227 09:37:19.944135     714 scope.go:122] "RemoveContainer" containerID="376c6336427a2b0af7525abe2d6342dabd3af6c951a46105f82ccfcb95200855"
	Dec 27 09:37:19 no-preload-963457 kubelet[714]: E1227 09:37:19.944322     714 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-qj7z2_kubernetes-dashboard(0dfd0f3d-b58f-4afe-ba92-d1dd627e08aa)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-qj7z2" podUID="0dfd0f3d-b58f-4afe-ba92-d1dd627e08aa"
	Dec 27 09:37:24 no-preload-963457 kubelet[714]: E1227 09:37:24.248588     714 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-qj7z2" containerName="dashboard-metrics-scraper"
	Dec 27 09:37:24 no-preload-963457 kubelet[714]: I1227 09:37:24.248650     714 scope.go:122] "RemoveContainer" containerID="376c6336427a2b0af7525abe2d6342dabd3af6c951a46105f82ccfcb95200855"
	Dec 27 09:37:24 no-preload-963457 kubelet[714]: E1227 09:37:24.248965     714 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-qj7z2_kubernetes-dashboard(0dfd0f3d-b58f-4afe-ba92-d1dd627e08aa)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-qj7z2" podUID="0dfd0f3d-b58f-4afe-ba92-d1dd627e08aa"
	Dec 27 09:37:30 no-preload-963457 kubelet[714]: I1227 09:37:30.973012     714 scope.go:122] "RemoveContainer" containerID="eb5602ad87854ef23de99c48a34f5a0d1c623cb078e609a2a4b567f1da38de9e"
	Dec 27 09:37:38 no-preload-963457 kubelet[714]: E1227 09:37:38.654243     714 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-wnzhx" containerName="coredns"
	Dec 27 09:37:41 no-preload-963457 kubelet[714]: E1227 09:37:41.838997     714 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-qj7z2" containerName="dashboard-metrics-scraper"
	Dec 27 09:37:41 no-preload-963457 kubelet[714]: I1227 09:37:41.839032     714 scope.go:122] "RemoveContainer" containerID="376c6336427a2b0af7525abe2d6342dabd3af6c951a46105f82ccfcb95200855"
	Dec 27 09:37:42 no-preload-963457 kubelet[714]: I1227 09:37:42.006739     714 scope.go:122] "RemoveContainer" containerID="376c6336427a2b0af7525abe2d6342dabd3af6c951a46105f82ccfcb95200855"
	Dec 27 09:37:42 no-preload-963457 kubelet[714]: E1227 09:37:42.006960     714 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-qj7z2" containerName="dashboard-metrics-scraper"
	Dec 27 09:37:42 no-preload-963457 kubelet[714]: I1227 09:37:42.006992     714 scope.go:122] "RemoveContainer" containerID="32f303997e3c2793c1602c91acb4c248bc7aa4fb7fcd92f892f04f2ea3647d50"
	Dec 27 09:37:42 no-preload-963457 kubelet[714]: E1227 09:37:42.007186     714 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-qj7z2_kubernetes-dashboard(0dfd0f3d-b58f-4afe-ba92-d1dd627e08aa)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-qj7z2" podUID="0dfd0f3d-b58f-4afe-ba92-d1dd627e08aa"
	Dec 27 09:37:44 no-preload-963457 kubelet[714]: E1227 09:37:44.248497     714 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-qj7z2" containerName="dashboard-metrics-scraper"
	Dec 27 09:37:44 no-preload-963457 kubelet[714]: I1227 09:37:44.248539     714 scope.go:122] "RemoveContainer" containerID="32f303997e3c2793c1602c91acb4c248bc7aa4fb7fcd92f892f04f2ea3647d50"
	Dec 27 09:37:44 no-preload-963457 kubelet[714]: E1227 09:37:44.248700     714 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-qj7z2_kubernetes-dashboard(0dfd0f3d-b58f-4afe-ba92-d1dd627e08aa)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-qj7z2" podUID="0dfd0f3d-b58f-4afe-ba92-d1dd627e08aa"
	Dec 27 09:37:52 no-preload-963457 kubelet[714]: I1227 09:37:52.637980     714 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Dec 27 09:37:52 no-preload-963457 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 27 09:37:52 no-preload-963457 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 27 09:37:52 no-preload-963457 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 27 09:37:52 no-preload-963457 systemd[1]: kubelet.service: Consumed 1.839s CPU time.
	
	
	==> kubernetes-dashboard [86f0772760e7d28ea35556b6cafec992e2b1fa6a83846afaed7db6f327a4aed4] <==
	2025/12/27 09:37:11 Starting overwatch
	2025/12/27 09:37:11 Using namespace: kubernetes-dashboard
	2025/12/27 09:37:11 Using in-cluster config to connect to apiserver
	2025/12/27 09:37:11 Using secret token for csrf signing
	2025/12/27 09:37:11 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/27 09:37:11 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/27 09:37:11 Successful initial request to the apiserver, version: v1.35.0
	2025/12/27 09:37:11 Generating JWE encryption key
	2025/12/27 09:37:11 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/27 09:37:11 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/27 09:37:12 Initializing JWE encryption key from synchronized object
	2025/12/27 09:37:12 Creating in-cluster Sidecar client
	2025/12/27 09:37:12 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/27 09:37:12 Serving insecurely on HTTP port: 9090
	2025/12/27 09:37:42 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [65af506e89098a5c5b4ffc8cff2878629f11900d58c4ff5309b81a9befadadd5] <==
	I1227 09:37:31.023586       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1227 09:37:31.031275       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1227 09:37:31.031331       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1227 09:37:31.033481       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 09:37:34.489490       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 09:37:38.749718       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 09:37:42.349154       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 09:37:45.403093       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 09:37:48.425470       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 09:37:48.429560       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1227 09:37:48.429706       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1227 09:37:48.429862       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"fe1a3181-a427-43a3-94cb-fd67a4c65111", APIVersion:"v1", ResourceVersion:"695", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-963457_65093ed9-f9f2-41f6-96f4-0ed59e9cf9b4 became leader
	I1227 09:37:48.429922       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-963457_65093ed9-f9f2-41f6-96f4-0ed59e9cf9b4!
	W1227 09:37:48.431666       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 09:37:48.434772       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1227 09:37:48.530445       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-963457_65093ed9-f9f2-41f6-96f4-0ed59e9cf9b4!
	W1227 09:37:50.438517       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 09:37:50.590566       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 09:37:52.594313       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 09:37:52.599295       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 09:37:54.602616       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 09:37:54.607061       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 09:37:56.610243       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 09:37:56.615856       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [eb5602ad87854ef23de99c48a34f5a0d1c623cb078e609a2a4b567f1da38de9e] <==
	I1227 09:37:00.270987       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1227 09:37:30.273209       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
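Note: the first storage-provisioner instance in the dump above (eb5602…) died on a version probe against the Service VIP before networking was ready. Below is a minimal Go sketch of that kind of call, with the endpoint and 32s timeout taken from the failure line; the real provisioner goes through client-go rather than raw HTTP, so treat this only as an illustration:

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 32 * time.Second, // matches "?timeout=32s" in the error line
			Transport: &http.Transport{
				// An in-cluster client would verify the apiserver with the
				// service-account CA; skipped here to keep the sketch self-contained.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get("https://10.96.0.1:443/version")
		if err != nil {
			// On this node the call timed out: "dial tcp 10.96.0.1:443: i/o timeout"
			fmt.Println("error getting server version:", err)
			return
		}
		defer resp.Body.Close()
		fmt.Println("apiserver reachable:", resp.Status)
	}

Once kube-proxy had synced its caches (09:37:00) and kubelet restarted the pod (RemoveContainer at 09:37:30), the second instance (65af50…) reached the apiserver and acquired the lease at 09:37:48.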
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-963457 -n no-preload-963457
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-963457 -n no-preload-963457: exit status 2 (372.781697ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
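Note: both the --format={{.APIServer}} flag above and the docker container inspect -f templates later in this log are Go text/template strings. A minimal self-contained sketch follows; the Status struct and its field values are assumptions for illustration, inferred from the flag and the "Running" output above, not minikube's actual types:

	package main

	import (
		"os"
		"text/template"
	)

	// Field names inferred from the --format flag; hypothetical stand-in struct.
	type Status struct {
		Host, Kubelet, APIServer string
	}

	func main() {
		tmpl := template.Must(template.New("status").Parse("{{.APIServer}}\n"))
		// Prints "Running", matching the stdout captured above.
		tmpl.Execute(os.Stdout, Status{Host: "Running", Kubelet: "Stopped", APIServer: "Running"})
	}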
helpers_test.go:270: (dbg) Run:  kubectl --context no-preload-963457 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/Pause (5.85s)
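Note: as a quick sanity check on the "Allocated resources" table in the describe output above, summing the per-pod CPU requests reproduces the kubelet's 850m / 10% figure. A minimal sketch, as a hypothetical helper rather than part of the test harness:

	package main

	import "fmt"

	func main() {
		// Millicore requests from the Non-terminated Pods table above:
		// coredns, etcd, kube-apiserver, kube-controller-manager, kube-scheduler, kindnet.
		requests := []int{100, 100, 250, 200, 100, 100}
		total := 0
		for _, m := range requests {
			total += m
		}
		const capacityMillicores = 8 * 1000 // node reports "cpu: 8"
		// Integer division matches kubectl's truncated percentage.
		fmt.Printf("cpu %dm (%d%%)\n", total, total*100/capacityMillicores)
		// Output: cpu 850m (10%)
	}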

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (5.74s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-497722 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p default-k8s-diff-port-497722 --alsologtostderr -v=1: exit status 80 (1.713146245s)

                                                
                                                
-- stdout --
	* Pausing node default-k8s-diff-port-497722 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1227 09:37:53.086560  649623 out.go:360] Setting OutFile to fd 1 ...
	I1227 09:37:53.086660  649623 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:37:53.086667  649623 out.go:374] Setting ErrFile to fd 2...
	I1227 09:37:53.086673  649623 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:37:53.086868  649623 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22343-373581/.minikube/bin
	I1227 09:37:53.087106  649623 out.go:368] Setting JSON to false
	I1227 09:37:53.087127  649623 mustload.go:66] Loading cluster: default-k8s-diff-port-497722
	I1227 09:37:53.087453  649623 config.go:182] Loaded profile config "default-k8s-diff-port-497722": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:37:53.087878  649623 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-497722 --format={{.State.Status}}
	I1227 09:37:53.106749  649623 host.go:66] Checking if "default-k8s-diff-port-497722" exists ...
	I1227 09:37:53.107251  649623 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 09:37:53.171044  649623 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:81 OomKillDisable:false NGoroutines:87 SystemTime:2025-12-27 09:37:53.160199071 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1227 09:37:53.172450  649623 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22158/minikube-v1.37.0-1766719468-22158-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1766719468-22158/minikube-v1.37.0-1766719468-22158-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1766719468-22158-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) preload-source:auto profile:default-k8s-diff-port-497722 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) rosetta:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1227 09:37:53.176805  649623 out.go:179] * Pausing node default-k8s-diff-port-497722 ... 
	I1227 09:37:53.177919  649623 host.go:66] Checking if "default-k8s-diff-port-497722" exists ...
	I1227 09:37:53.178281  649623 ssh_runner.go:195] Run: systemctl --version
	I1227 09:37:53.178337  649623 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-497722
	I1227 09:37:53.198837  649623 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33468 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/default-k8s-diff-port-497722/id_rsa Username:docker}
	I1227 09:37:53.290303  649623 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 09:37:53.302962  649623 pause.go:52] kubelet running: true
	I1227 09:37:53.303026  649623 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1227 09:37:53.480198  649623 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1227 09:37:53.480270  649623 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1227 09:37:53.547432  649623 cri.go:96] found id: "cbe3be0499b1e292ff93ccb690ac42127d4b7da2ccfb910a73b5aa34c1bbfa2c"
	I1227 09:37:53.547456  649623 cri.go:96] found id: "772b7ab9392d0f203c9344bdcc4efd5006401e9b969058872cfb1d1c6c1826b6"
	I1227 09:37:53.547462  649623 cri.go:96] found id: "30fa25883572a506bd41a669b17278222880dcd3554abbe51f237350512fd65e"
	I1227 09:37:53.547467  649623 cri.go:96] found id: "def2adfb364c1d7110ce2224f01de177f744737fe0b1a459d15cf21d75aa4b3c"
	I1227 09:37:53.547472  649623 cri.go:96] found id: "37e5bbef1518ab34328ea74c051dd01a3e28b525871fc9d8ccc080b6095d603d"
	I1227 09:37:53.547476  649623 cri.go:96] found id: "f7a05d26251e8a9f4091116e036a79a8654a182636649e96285fc252c0530199"
	I1227 09:37:53.547482  649623 cri.go:96] found id: "05ea82c1bf330e30a293e6d7ea1c01b1766ebce50eeaf8868bbcf622fd71d8e8"
	I1227 09:37:53.547487  649623 cri.go:96] found id: "0c8a5140613f466c8107ca22c4400874507b9d96db9fc14bb7f9ecf967957942"
	I1227 09:37:53.547491  649623 cri.go:96] found id: "0f35c56ae0629feaaf5192c69a9a652f101e67591c4c93f500daa6ceb2a62911"
	I1227 09:37:53.547502  649623 cri.go:96] found id: "a14b289c8f8518d9f61f525c74ad8885cda783871570ab9f26261ca1353ef964"
	I1227 09:37:53.547510  649623 cri.go:96] found id: "54e4f5a34233f7629865a95e361beb06cbd770029b4c9e51b5593b268204cffc"
	I1227 09:37:53.547514  649623 cri.go:96] found id: ""
	I1227 09:37:53.547557  649623 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 09:37:53.561209  649623 retry.go:84] will retry after 100ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T09:37:53Z" level=error msg="open /run/runc: no such file or directory"
	I1227 09:37:53.690592  649623 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 09:37:53.703405  649623 pause.go:52] kubelet running: false
	I1227 09:37:53.703452  649623 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1227 09:37:53.868149  649623 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1227 09:37:53.868248  649623 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1227 09:37:53.944748  649623 cri.go:96] found id: "cbe3be0499b1e292ff93ccb690ac42127d4b7da2ccfb910a73b5aa34c1bbfa2c"
	I1227 09:37:53.944774  649623 cri.go:96] found id: "772b7ab9392d0f203c9344bdcc4efd5006401e9b969058872cfb1d1c6c1826b6"
	I1227 09:37:53.944781  649623 cri.go:96] found id: "30fa25883572a506bd41a669b17278222880dcd3554abbe51f237350512fd65e"
	I1227 09:37:53.944787  649623 cri.go:96] found id: "def2adfb364c1d7110ce2224f01de177f744737fe0b1a459d15cf21d75aa4b3c"
	I1227 09:37:53.944802  649623 cri.go:96] found id: "37e5bbef1518ab34328ea74c051dd01a3e28b525871fc9d8ccc080b6095d603d"
	I1227 09:37:53.944807  649623 cri.go:96] found id: "f7a05d26251e8a9f4091116e036a79a8654a182636649e96285fc252c0530199"
	I1227 09:37:53.944811  649623 cri.go:96] found id: "05ea82c1bf330e30a293e6d7ea1c01b1766ebce50eeaf8868bbcf622fd71d8e8"
	I1227 09:37:53.944816  649623 cri.go:96] found id: "0c8a5140613f466c8107ca22c4400874507b9d96db9fc14bb7f9ecf967957942"
	I1227 09:37:53.944820  649623 cri.go:96] found id: "0f35c56ae0629feaaf5192c69a9a652f101e67591c4c93f500daa6ceb2a62911"
	I1227 09:37:53.944828  649623 cri.go:96] found id: "a14b289c8f8518d9f61f525c74ad8885cda783871570ab9f26261ca1353ef964"
	I1227 09:37:53.944832  649623 cri.go:96] found id: "54e4f5a34233f7629865a95e361beb06cbd770029b4c9e51b5593b268204cffc"
	I1227 09:37:53.944837  649623 cri.go:96] found id: ""
	I1227 09:37:53.944881  649623 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 09:37:54.457491  649623 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 09:37:54.475896  649623 pause.go:52] kubelet running: false
	I1227 09:37:54.475960  649623 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1227 09:37:54.643922  649623 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1227 09:37:54.644018  649623 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1227 09:37:54.718642  649623 cri.go:96] found id: "cbe3be0499b1e292ff93ccb690ac42127d4b7da2ccfb910a73b5aa34c1bbfa2c"
	I1227 09:37:54.718667  649623 cri.go:96] found id: "772b7ab9392d0f203c9344bdcc4efd5006401e9b969058872cfb1d1c6c1826b6"
	I1227 09:37:54.718673  649623 cri.go:96] found id: "30fa25883572a506bd41a669b17278222880dcd3554abbe51f237350512fd65e"
	I1227 09:37:54.718678  649623 cri.go:96] found id: "def2adfb364c1d7110ce2224f01de177f744737fe0b1a459d15cf21d75aa4b3c"
	I1227 09:37:54.718683  649623 cri.go:96] found id: "37e5bbef1518ab34328ea74c051dd01a3e28b525871fc9d8ccc080b6095d603d"
	I1227 09:37:54.718689  649623 cri.go:96] found id: "f7a05d26251e8a9f4091116e036a79a8654a182636649e96285fc252c0530199"
	I1227 09:37:54.718694  649623 cri.go:96] found id: "05ea82c1bf330e30a293e6d7ea1c01b1766ebce50eeaf8868bbcf622fd71d8e8"
	I1227 09:37:54.718698  649623 cri.go:96] found id: "0c8a5140613f466c8107ca22c4400874507b9d96db9fc14bb7f9ecf967957942"
	I1227 09:37:54.718703  649623 cri.go:96] found id: "0f35c56ae0629feaaf5192c69a9a652f101e67591c4c93f500daa6ceb2a62911"
	I1227 09:37:54.718712  649623 cri.go:96] found id: "a14b289c8f8518d9f61f525c74ad8885cda783871570ab9f26261ca1353ef964"
	I1227 09:37:54.718716  649623 cri.go:96] found id: "54e4f5a34233f7629865a95e361beb06cbd770029b4c9e51b5593b268204cffc"
	I1227 09:37:54.718721  649623 cri.go:96] found id: ""
	I1227 09:37:54.718777  649623 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 09:37:54.734389  649623 out.go:203] 
	W1227 09:37:54.735506  649623 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T09:37:54Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T09:37:54Z" level=error msg="open /run/runc: no such file or directory"
	
	W1227 09:37:54.735523  649623 out.go:285] * 
	* 
	W1227 09:37:54.738712  649623 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1227 09:37:54.740244  649623 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p default-k8s-diff-port-497722 --alsologtostderr -v=1 failed: exit status 80
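Note: the pause failure above comes from the container-listing step. After kubelet is stopped, `sudo runc list -f json` fails on this crio node with "open /run/runc: no such file or directory"; minikube retries the listing (retry.go: "will retry after 100ms") and eventually exits with GUEST_PAUSE. A minimal sketch of that retry shape, as a hypothetical helper rather than minikube's actual code:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func listRuncContainers(attempts int, delay time.Duration) ([]byte, error) {
		var lastErr error
		for i := 0; i < attempts; i++ {
			out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
			if err == nil {
				return out, nil
			}
			lastErr = fmt.Errorf("runc list: %w: %s", err, out)
			time.Sleep(delay)
		}
		return nil, lastErr
	}

	func main() {
		if _, err := listRuncContainers(2, 100*time.Millisecond); err != nil {
			fmt.Println("X Exiting due to GUEST_PAUSE:", err)
		}
	}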
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect default-k8s-diff-port-497722
helpers_test.go:244: (dbg) docker inspect default-k8s-diff-port-497722:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "69d33a148b7ca9ade22e9948d8b57467a62cb59e5e045051bc08113923c77288",
	        "Created": "2025-12-27T09:35:53.140774946Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 631890,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-27T09:36:56.395973569Z",
	            "FinishedAt": "2025-12-27T09:36:54.268457697Z"
	        },
	        "Image": "sha256:c444b5e11df9d6bc496256b0e11f4e11deb33a885211ded2c3a4667df55bcb6b",
	        "ResolvConfPath": "/var/lib/docker/containers/69d33a148b7ca9ade22e9948d8b57467a62cb59e5e045051bc08113923c77288/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/69d33a148b7ca9ade22e9948d8b57467a62cb59e5e045051bc08113923c77288/hostname",
	        "HostsPath": "/var/lib/docker/containers/69d33a148b7ca9ade22e9948d8b57467a62cb59e5e045051bc08113923c77288/hosts",
	        "LogPath": "/var/lib/docker/containers/69d33a148b7ca9ade22e9948d8b57467a62cb59e5e045051bc08113923c77288/69d33a148b7ca9ade22e9948d8b57467a62cb59e5e045051bc08113923c77288-json.log",
	        "Name": "/default-k8s-diff-port-497722",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-497722:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-497722",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "69d33a148b7ca9ade22e9948d8b57467a62cb59e5e045051bc08113923c77288",
	                "LowerDir": "/var/lib/docker/overlay2/9b457a702b8ced050755c1f6a8fdb882cc79faef36a30cb52139dd971b84f61a-init/diff:/var/lib/docker/overlay2/8c7e7d4b0a6e752d3b033998593f18412fe880cdae3fca36ce4655d5d5dd6a34/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9b457a702b8ced050755c1f6a8fdb882cc79faef36a30cb52139dd971b84f61a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9b457a702b8ced050755c1f6a8fdb882cc79faef36a30cb52139dd971b84f61a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9b457a702b8ced050755c1f6a8fdb882cc79faef36a30cb52139dd971b84f61a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-497722",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-497722/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-497722",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-497722",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-497722",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "bdf3bd411840e79ffa85ddd2cdcf7664c7a9612af15aca0529d23d654d5b90dc",
	            "SandboxKey": "/var/run/docker/netns/bdf3bd411840",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33468"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33469"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33472"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33470"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33471"
	                    }
	                ]
	            },
	            "Networks": {
	                "default-k8s-diff-port-497722": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "3e6df945c4cd8ddce46059c83cc7bed3e6a73494a8b947e27d2c18ad8eacf919",
	                    "EndpointID": "e8aa79f3440b532c08e945e346b16babaa3380c5634f492816d85115cd823daa",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "82:ad:5a:7b:85:45",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-497722",
	                        "69d33a148b7c"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
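
Note: the empty HostPort values under HostConfig.PortBindings in the inspect output above are expected. minikube publishes each port as 127.0.0.1::<port>, letting Docker assign an ephemeral host port, so the resolved bindings only appear under NetworkSettings.Ports (for example, 8444/tcp maps to 127.0.0.1:33471 here). Below is a minimal Go sketch of reading one such binding back from plain docker container inspect JSON; the profile name and port are copied from the output above, and the helper itself is illustrative, not minikube's own code.

	// hostport.go: resolve the ephemeral host port Docker assigned to a
	// published container port, the same query the logs issue via
	// docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'".
	package main

	import (
		"encoding/json"
		"fmt"
		"log"
		"os/exec"
	)

	// portBinding mirrors the relevant slice of `docker container inspect` JSON.
	type portBinding struct {
		HostIP   string `json:"HostIp"`
		HostPort string `json:"HostPort"`
	}

	type inspectResult struct {
		NetworkSettings struct {
			Ports map[string][]portBinding `json:"Ports"`
		} `json:"NetworkSettings"`
	}

	func main() {
		// `docker container inspect` prints a JSON array, one element per container.
		out, err := exec.Command("docker", "container", "inspect",
			"default-k8s-diff-port-497722").Output()
		if err != nil {
			log.Fatal(err)
		}
		var results []inspectResult
		if err := json.Unmarshal(out, &results); err != nil {
			log.Fatal(err)
		}
		// 8444/tcp is the apiserver port this profile publishes; Docker chose 33471 above.
		for _, b := range results[0].NetworkSettings.Ports["8444/tcp"] {
			fmt.Printf("8444/tcp -> %s:%s\n", b.HostIP, b.HostPort)
		}
	}
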
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-497722 -n default-k8s-diff-port-497722
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-497722 -n default-k8s-diff-port-497722: exit status 2 (368.323296ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
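
The harness treats this non-zero status as informational ("status error: exit status 2 (may be ok)" below): stdout still reports the host as Running while the exit code flags a degraded component. A hedged Go sketch of the same check, keeping the exit code separate from the printed host state; the binary path and profile name are taken from the command above.

	// status.go: run `minikube status` for a profile and inspect the exit
	// code, mirroring the helpers_test.go check. A non-zero exit does not
	// necessarily mean the check failed outright.
	package main

	import (
		"errors"
		"fmt"
		"log"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-amd64", "status",
			"--format={{.Host}}", "-p", "default-k8s-diff-port-497722")
		out, err := cmd.Output() // out holds stdout even on a non-zero exit
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			// The log above shows exactly this shape: stdout "Running", exit status 2.
			fmt.Printf("host=%s exit=%d (may be ok)\n", out, ee.ExitCode())
			return
		}
		if err != nil {
			log.Fatal(err) // e.g. binary not found
		}
		fmt.Printf("host=%s exit=0\n", out)
	}
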
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-497722 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-497722 logs -n 25: (1.16270182s)
helpers_test.go:261: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p old-k8s-version-094398                                                                                                                                                                                                                     │ old-k8s-version-094398       │ jenkins │ v1.37.0 │ 27 Dec 25 09:36 UTC │ 27 Dec 25 09:36 UTC │
	│ addons  │ enable dashboard -p no-preload-963457 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-963457            │ jenkins │ v1.37.0 │ 27 Dec 25 09:36 UTC │ 27 Dec 25 09:36 UTC │
	│ start   │ -p no-preload-963457 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                       │ no-preload-963457            │ jenkins │ v1.37.0 │ 27 Dec 25 09:36 UTC │ 27 Dec 25 09:37 UTC │
	│ delete  │ -p old-k8s-version-094398                                                                                                                                                                                                                     │ old-k8s-version-094398       │ jenkins │ v1.37.0 │ 27 Dec 25 09:36 UTC │ 27 Dec 25 09:36 UTC │
	│ start   │ -p newest-cni-246956 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ newest-cni-246956            │ jenkins │ v1.37.0 │ 27 Dec 25 09:36 UTC │ 27 Dec 25 09:37 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-497722 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-497722 │ jenkins │ v1.37.0 │ 27 Dec 25 09:36 UTC │ 27 Dec 25 09:36 UTC │
	│ start   │ -p default-k8s-diff-port-497722 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-497722 │ jenkins │ v1.37.0 │ 27 Dec 25 09:36 UTC │ 27 Dec 25 09:37 UTC │
	│ image   │ embed-certs-912564 image list --format=json                                                                                                                                                                                                   │ embed-certs-912564           │ jenkins │ v1.37.0 │ 27 Dec 25 09:37 UTC │ 27 Dec 25 09:37 UTC │
	│ pause   │ -p embed-certs-912564 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-912564           │ jenkins │ v1.37.0 │ 27 Dec 25 09:37 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-246956 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-246956            │ jenkins │ v1.37.0 │ 27 Dec 25 09:37 UTC │                     │
	│ stop    │ -p newest-cni-246956 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-246956            │ jenkins │ v1.37.0 │ 27 Dec 25 09:37 UTC │ 27 Dec 25 09:37 UTC │
	│ delete  │ -p embed-certs-912564                                                                                                                                                                                                                         │ embed-certs-912564           │ jenkins │ v1.37.0 │ 27 Dec 25 09:37 UTC │ 27 Dec 25 09:37 UTC │
	│ delete  │ -p embed-certs-912564                                                                                                                                                                                                                         │ embed-certs-912564           │ jenkins │ v1.37.0 │ 27 Dec 25 09:37 UTC │ 27 Dec 25 09:37 UTC │
	│ start   │ -p auto-157923 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-157923                  │ jenkins │ v1.37.0 │ 27 Dec 25 09:37 UTC │                     │
	│ addons  │ enable dashboard -p newest-cni-246956 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-246956            │ jenkins │ v1.37.0 │ 27 Dec 25 09:37 UTC │ 27 Dec 25 09:37 UTC │
	│ start   │ -p newest-cni-246956 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ newest-cni-246956            │ jenkins │ v1.37.0 │ 27 Dec 25 09:37 UTC │ 27 Dec 25 09:37 UTC │
	│ image   │ newest-cni-246956 image list --format=json                                                                                                                                                                                                    │ newest-cni-246956            │ jenkins │ v1.37.0 │ 27 Dec 25 09:37 UTC │ 27 Dec 25 09:37 UTC │
	│ pause   │ -p newest-cni-246956 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-246956            │ jenkins │ v1.37.0 │ 27 Dec 25 09:37 UTC │                     │
	│ delete  │ -p newest-cni-246956                                                                                                                                                                                                                          │ newest-cni-246956            │ jenkins │ v1.37.0 │ 27 Dec 25 09:37 UTC │ 27 Dec 25 09:37 UTC │
	│ delete  │ -p newest-cni-246956                                                                                                                                                                                                                          │ newest-cni-246956            │ jenkins │ v1.37.0 │ 27 Dec 25 09:37 UTC │ 27 Dec 25 09:37 UTC │
	│ start   │ -p kindnet-157923 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio                                                                                                      │ kindnet-157923               │ jenkins │ v1.37.0 │ 27 Dec 25 09:37 UTC │                     │
	│ image   │ no-preload-963457 image list --format=json                                                                                                                                                                                                    │ no-preload-963457            │ jenkins │ v1.37.0 │ 27 Dec 25 09:37 UTC │ 27 Dec 25 09:37 UTC │
	│ pause   │ -p no-preload-963457 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-963457            │ jenkins │ v1.37.0 │ 27 Dec 25 09:37 UTC │                     │
	│ image   │ default-k8s-diff-port-497722 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-497722 │ jenkins │ v1.37.0 │ 27 Dec 25 09:37 UTC │ 27 Dec 25 09:37 UTC │
	│ pause   │ -p default-k8s-diff-port-497722 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-497722 │ jenkins │ v1.37.0 │ 27 Dec 25 09:37 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/27 09:37:46
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1227 09:37:46.864671  647990 out.go:360] Setting OutFile to fd 1 ...
	I1227 09:37:46.864827  647990 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:37:46.864841  647990 out.go:374] Setting ErrFile to fd 2...
	I1227 09:37:46.864848  647990 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:37:46.865083  647990 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22343-373581/.minikube/bin
	I1227 09:37:46.865565  647990 out.go:368] Setting JSON to false
	I1227 09:37:46.866843  647990 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4811,"bootTime":1766823456,"procs":347,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1227 09:37:46.866917  647990 start.go:143] virtualization: kvm guest
	I1227 09:37:46.868763  647990 out.go:179] * [kindnet-157923] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1227 09:37:46.869915  647990 out.go:179]   - MINIKUBE_LOCATION=22343
	I1227 09:37:46.869935  647990 notify.go:221] Checking for updates...
	I1227 09:37:46.872147  647990 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 09:37:46.873206  647990 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22343-373581/kubeconfig
	I1227 09:37:46.874198  647990 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22343-373581/.minikube
	I1227 09:37:46.875076  647990 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1227 09:37:46.876025  647990 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1227 09:37:46.877293  647990 config.go:182] Loaded profile config "auto-157923": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:37:46.877377  647990 config.go:182] Loaded profile config "default-k8s-diff-port-497722": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:37:46.877469  647990 config.go:182] Loaded profile config "no-preload-963457": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:37:46.877582  647990 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 09:37:46.900930  647990 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1227 09:37:46.901023  647990 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 09:37:46.956322  647990 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-27 09:37:46.946538862 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1227 09:37:46.956429  647990 docker.go:319] overlay module found
	I1227 09:37:46.957771  647990 out.go:179] * Using the docker driver based on user configuration
	I1227 09:37:46.958739  647990 start.go:309] selected driver: docker
	I1227 09:37:46.958754  647990 start.go:928] validating driver "docker" against <nil>
	I1227 09:37:46.958765  647990 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 09:37:46.959277  647990 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 09:37:47.014097  647990 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-27 09:37:47.004076494 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1227 09:37:47.014262  647990 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1227 09:37:47.014493  647990 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 09:37:47.015832  647990 out.go:179] * Using Docker driver with root privileges
	I1227 09:37:47.016778  647990 cni.go:84] Creating CNI manager for "kindnet"
	I1227 09:37:47.016807  647990 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I1227 09:37:47.016889  647990 start.go:353] cluster config:
	{Name:kindnet-157923 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:kindnet-157923 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 09:37:47.018032  647990 out.go:179] * Starting "kindnet-157923" primary control-plane node in "kindnet-157923" cluster
	I1227 09:37:47.018884  647990 cache.go:134] Beginning downloading kic base image for docker with crio
	I1227 09:37:47.019947  647990 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
	I1227 09:37:47.021519  647990 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 09:37:47.021557  647990 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22343-373581/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4
	I1227 09:37:47.021566  647990 cache.go:65] Caching tarball of preloaded images
	I1227 09:37:47.021665  647990 preload.go:251] Found /home/jenkins/minikube-integration/22343-373581/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1227 09:37:47.021690  647990 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I1227 09:37:47.021942  647990 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1227 09:37:47.022225  647990 profile.go:143] Saving config to /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/kindnet-157923/config.json ...
	I1227 09:37:47.022262  647990 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/kindnet-157923/config.json: {Name:mkc9489786022a3c521e082ba47d43b09ee5c209 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:37:47.043904  647990 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon, skipping pull
	I1227 09:37:47.043922  647990 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in daemon, skipping load
	I1227 09:37:47.043937  647990 cache.go:243] Successfully downloaded all kic artifacts
	I1227 09:37:47.043970  647990 start.go:360] acquireMachinesLock for kindnet-157923: {Name:mk5cf38a4c59f5d9a1319baf127d324f7051b88d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 09:37:47.044057  647990 start.go:364] duration metric: took 73.03µs to acquireMachinesLock for "kindnet-157923"
	I1227 09:37:47.044079  647990 start.go:93] Provisioning new machine with config: &{Name:kindnet-157923 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:kindnet-157923 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1227 09:37:47.044147  647990 start.go:125] createHost starting for "" (driver="docker")
	I1227 09:37:44.467035  640477 addons.go:530] duration metric: took 525.080113ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1227 09:37:44.738293  640477 kapi.go:214] "coredns" deployment in "kube-system" namespace and "auto-157923" context rescaled to 1 replicas
	W1227 09:37:46.238781  640477 node_ready.go:57] node "auto-157923" has "Ready":"False" status (will retry)
	W1227 09:37:48.238941  640477 node_ready.go:57] node "auto-157923" has "Ready":"False" status (will retry)
	I1227 09:37:47.045517  647990 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1227 09:37:47.045710  647990 start.go:159] libmachine.API.Create for "kindnet-157923" (driver="docker")
	I1227 09:37:47.045736  647990 client.go:173] LocalClient.Create starting
	I1227 09:37:47.045836  647990 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca.pem
	I1227 09:37:47.045874  647990 main.go:144] libmachine: Decoding PEM data...
	I1227 09:37:47.045891  647990 main.go:144] libmachine: Parsing certificate...
	I1227 09:37:47.045952  647990 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22343-373581/.minikube/certs/cert.pem
	I1227 09:37:47.045975  647990 main.go:144] libmachine: Decoding PEM data...
	I1227 09:37:47.045984  647990 main.go:144] libmachine: Parsing certificate...
	I1227 09:37:47.046348  647990 cli_runner.go:164] Run: docker network inspect kindnet-157923 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1227 09:37:47.063570  647990 cli_runner.go:211] docker network inspect kindnet-157923 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1227 09:37:47.063648  647990 network_create.go:284] running [docker network inspect kindnet-157923] to gather additional debugging logs...
	I1227 09:37:47.063669  647990 cli_runner.go:164] Run: docker network inspect kindnet-157923
	W1227 09:37:47.079738  647990 cli_runner.go:211] docker network inspect kindnet-157923 returned with exit code 1
	I1227 09:37:47.079765  647990 network_create.go:287] error running [docker network inspect kindnet-157923]: docker network inspect kindnet-157923: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network kindnet-157923 not found
	I1227 09:37:47.079776  647990 network_create.go:289] output of [docker network inspect kindnet-157923]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network kindnet-157923 not found
	
	** /stderr **
	I1227 09:37:47.079880  647990 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 09:37:47.096505  647990 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-8c57ecff6d5e IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:1e:54:d8:ad:bd:73} reservation:<nil>}
	I1227 09:37:47.097278  647990 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-21a699476be6 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:fe:e8:d9:95:e6:36} reservation:<nil>}
	I1227 09:37:47.097729  647990 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-8e97c5356905 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:f2:d9:6b:42:f5:e3} reservation:<nil>}
	I1227 09:37:47.098498  647990 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001d83dd0}
	I1227 09:37:47.098520  647990 network_create.go:124] attempt to create docker network kindnet-157923 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1227 09:37:47.098594  647990 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kindnet-157923 kindnet-157923
	I1227 09:37:47.148980  647990 network_create.go:108] docker network kindnet-157923 192.168.76.0/24 created
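
The three "skipping subnet" lines above show the free-subnet scan: candidate 192.168.x.0/24 ranges are probed starting at 192.168.49.0 and stepping by 9 (49, 58, 67, 76) until one has no existing bridge occupying its gateway address. A rough Go sketch of that scan, inferred from the log output rather than taken from minikube's network.go:

	// freesubnet.go: pick the first 192.168.x.0/24 candidate whose gateway
	// address is not already bound to a local interface, mirroring the
	// "skipping subnet ... that is taken" / "using free private subnet" lines.
	package main

	import (
		"fmt"
		"net"
	)

	// gatewayTaken reports whether any local interface already owns ip,
	// which is how a subnet in use by an existing docker bridge shows up.
	func gatewayTaken(ip net.IP) bool {
		addrs, err := net.InterfaceAddrs()
		if err != nil {
			return true // be conservative on error
		}
		for _, a := range addrs {
			if ipnet, ok := a.(*net.IPNet); ok && ipnet.IP.Equal(ip) {
				return true
			}
		}
		return false
	}

	func main() {
		// Candidates observed in the log: 192.168.49.0/24, then +9 per step.
		for third := 49; third <= 247; third += 9 {
			gw := net.IPv4(192, 168, byte(third), 1)
			if gatewayTaken(gw) {
				fmt.Printf("skipping 192.168.%d.0/24 (gateway %s in use)\n", third, gw)
				continue
			}
			fmt.Printf("using free private subnet 192.168.%d.0/24\n", third)
			return
		}
	}
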
	I1227 09:37:47.149015  647990 kic.go:121] calculated static IP "192.168.76.2" for the "kindnet-157923" container
	I1227 09:37:47.149100  647990 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1227 09:37:47.166970  647990 cli_runner.go:164] Run: docker volume create kindnet-157923 --label name.minikube.sigs.k8s.io=kindnet-157923 --label created_by.minikube.sigs.k8s.io=true
	I1227 09:37:47.185066  647990 oci.go:103] Successfully created a docker volume kindnet-157923
	I1227 09:37:47.185135  647990 cli_runner.go:164] Run: docker run --rm --name kindnet-157923-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-157923 --entrypoint /usr/bin/test -v kindnet-157923:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -d /var/lib
	I1227 09:37:47.721086  647990 oci.go:107] Successfully prepared a docker volume kindnet-157923
	I1227 09:37:47.721191  647990 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 09:37:47.721212  647990 kic.go:194] Starting extracting preloaded images to volume ...
	I1227 09:37:47.721327  647990 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22343-373581/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-157923:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -I lz4 -xf /preloaded.tar -C /extractDir
	I1227 09:37:51.492743  647990 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22343-373581/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-157923:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -I lz4 -xf /preloaded.tar -C /extractDir: (3.771349551s)
	I1227 09:37:51.492802  647990 kic.go:203] duration metric: took 3.771572095s to extract preloaded images to volume ...
	W1227 09:37:51.492907  647990 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1227 09:37:51.492986  647990 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1227 09:37:51.493040  647990 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1227 09:37:51.548737  647990 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kindnet-157923 --name kindnet-157923 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-157923 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kindnet-157923 --network kindnet-157923 --ip 192.168.76.2 --volume kindnet-157923:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a
	I1227 09:37:51.804564  647990 cli_runner.go:164] Run: docker container inspect kindnet-157923 --format={{.State.Running}}
	I1227 09:37:51.823570  647990 cli_runner.go:164] Run: docker container inspect kindnet-157923 --format={{.State.Status}}
	I1227 09:37:51.841753  647990 cli_runner.go:164] Run: docker exec kindnet-157923 stat /var/lib/dpkg/alternatives/iptables
	W1227 09:37:50.284486  640477 node_ready.go:57] node "auto-157923" has "Ready":"False" status (will retry)
	W1227 09:37:52.739006  640477 node_ready.go:57] node "auto-157923" has "Ready":"False" status (will retry)
	I1227 09:37:51.887055  647990 oci.go:144] the created container "kindnet-157923" has a running status.
	I1227 09:37:51.887089  647990 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22343-373581/.minikube/machines/kindnet-157923/id_rsa...
	I1227 09:37:51.990591  647990 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22343-373581/.minikube/machines/kindnet-157923/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1227 09:37:52.019096  647990 cli_runner.go:164] Run: docker container inspect kindnet-157923 --format={{.State.Status}}
	I1227 09:37:52.044443  647990 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1227 09:37:52.044486  647990 kic_runner.go:114] Args: [docker exec --privileged kindnet-157923 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1227 09:37:52.088186  647990 cli_runner.go:164] Run: docker container inspect kindnet-157923 --format={{.State.Status}}
	I1227 09:37:52.114053  647990 machine.go:94] provisionDockerMachine start ...
	I1227 09:37:52.114164  647990 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-157923
	I1227 09:37:52.136116  647990 main.go:144] libmachine: Using SSH client type: native
	I1227 09:37:52.136464  647990 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33483 <nil> <nil>}
	I1227 09:37:52.136486  647990 main.go:144] libmachine: About to run SSH command:
	hostname
	I1227 09:37:52.273019  647990 main.go:144] libmachine: SSH cmd err, output: <nil>: kindnet-157923
	
	I1227 09:37:52.273050  647990 ubuntu.go:182] provisioning hostname "kindnet-157923"
	I1227 09:37:52.273117  647990 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-157923
	I1227 09:37:52.292545  647990 main.go:144] libmachine: Using SSH client type: native
	I1227 09:37:52.292899  647990 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33483 <nil> <nil>}
	I1227 09:37:52.292924  647990 main.go:144] libmachine: About to run SSH command:
	sudo hostname kindnet-157923 && echo "kindnet-157923" | sudo tee /etc/hostname
	I1227 09:37:52.435204  647990 main.go:144] libmachine: SSH cmd err, output: <nil>: kindnet-157923
	
	I1227 09:37:52.435289  647990 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-157923
	I1227 09:37:52.454377  647990 main.go:144] libmachine: Using SSH client type: native
	I1227 09:37:52.454614  647990 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33483 <nil> <nil>}
	I1227 09:37:52.454641  647990 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skindnet-157923' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kindnet-157923/g' /etc/hosts;
				else 
					echo '127.0.1.1 kindnet-157923' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1227 09:37:52.585466  647990 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1227 09:37:52.585495  647990 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22343-373581/.minikube CaCertPath:/home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22343-373581/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22343-373581/.minikube}
	I1227 09:37:52.585537  647990 ubuntu.go:190] setting up certificates
	I1227 09:37:52.585558  647990 provision.go:84] configureAuth start
	I1227 09:37:52.585620  647990 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-157923
	I1227 09:37:52.606953  647990 provision.go:143] copyHostCerts
	I1227 09:37:52.607022  647990 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-373581/.minikube/ca.pem, removing ...
	I1227 09:37:52.607041  647990 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-373581/.minikube/ca.pem
	I1227 09:37:52.607124  647990 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22343-373581/.minikube/ca.pem (1082 bytes)
	I1227 09:37:52.607237  647990 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-373581/.minikube/cert.pem, removing ...
	I1227 09:37:52.607250  647990 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-373581/.minikube/cert.pem
	I1227 09:37:52.607292  647990 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22343-373581/.minikube/cert.pem (1123 bytes)
	I1227 09:37:52.607371  647990 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-373581/.minikube/key.pem, removing ...
	I1227 09:37:52.607382  647990 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-373581/.minikube/key.pem
	I1227 09:37:52.607433  647990 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22343-373581/.minikube/key.pem (1679 bytes)
	I1227 09:37:52.607502  647990 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22343-373581/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca-key.pem org=jenkins.kindnet-157923 san=[127.0.0.1 192.168.76.2 kindnet-157923 localhost minikube]
	I1227 09:37:52.773980  647990 provision.go:177] copyRemoteCerts
	I1227 09:37:52.774032  647990 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1227 09:37:52.774076  647990 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-157923
	I1227 09:37:52.794674  647990 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33483 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/kindnet-157923/id_rsa Username:docker}
	I1227 09:37:52.891600  647990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1227 09:37:52.910230  647990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1227 09:37:52.927214  647990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I1227 09:37:52.944338  647990 provision.go:87] duration metric: took 358.764637ms to configureAuth
	I1227 09:37:52.944365  647990 ubuntu.go:206] setting minikube options for container-runtime
	I1227 09:37:52.944517  647990 config.go:182] Loaded profile config "kindnet-157923": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:37:52.944628  647990 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-157923
	I1227 09:37:52.963064  647990 main.go:144] libmachine: Using SSH client type: native
	I1227 09:37:52.963415  647990 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33483 <nil> <nil>}
	I1227 09:37:52.963445  647990 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1227 09:37:53.246993  647990 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1227 09:37:53.247019  647990 machine.go:97] duration metric: took 1.13294086s to provisionDockerMachine
	I1227 09:37:53.247032  647990 client.go:176] duration metric: took 6.201286787s to LocalClient.Create
	I1227 09:37:53.247049  647990 start.go:167] duration metric: took 6.201338369s to libmachine.API.Create "kindnet-157923"
	I1227 09:37:53.247059  647990 start.go:293] postStartSetup for "kindnet-157923" (driver="docker")
	I1227 09:37:53.247070  647990 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1227 09:37:53.247143  647990 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1227 09:37:53.247196  647990 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-157923
	I1227 09:37:53.266249  647990 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33483 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/kindnet-157923/id_rsa Username:docker}
	I1227 09:37:53.361547  647990 ssh_runner.go:195] Run: cat /etc/os-release
	I1227 09:37:53.365069  647990 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1227 09:37:53.365103  647990 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1227 09:37:53.365132  647990 filesync.go:126] Scanning /home/jenkins/minikube-integration/22343-373581/.minikube/addons for local assets ...
	I1227 09:37:53.365203  647990 filesync.go:126] Scanning /home/jenkins/minikube-integration/22343-373581/.minikube/files for local assets ...
	I1227 09:37:53.365317  647990 filesync.go:149] local asset: /home/jenkins/minikube-integration/22343-373581/.minikube/files/etc/ssl/certs/3771712.pem -> 3771712.pem in /etc/ssl/certs
	I1227 09:37:53.365452  647990 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1227 09:37:53.372927  647990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/files/etc/ssl/certs/3771712.pem --> /etc/ssl/certs/3771712.pem (1708 bytes)
	I1227 09:37:53.391845  647990 start.go:296] duration metric: took 144.772765ms for postStartSetup
	I1227 09:37:53.392155  647990 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-157923
	I1227 09:37:53.411616  647990 profile.go:143] Saving config to /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/kindnet-157923/config.json ...
	I1227 09:37:53.411887  647990 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 09:37:53.411930  647990 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-157923
	I1227 09:37:53.428165  647990 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33483 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/kindnet-157923/id_rsa Username:docker}
	I1227 09:37:53.516235  647990 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1227 09:37:53.521533  647990 start.go:128] duration metric: took 6.477368164s to createHost
	I1227 09:37:53.521559  647990 start.go:83] releasing machines lock for "kindnet-157923", held for 6.477490775s
	I1227 09:37:53.521631  647990 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-157923
	I1227 09:37:53.540592  647990 ssh_runner.go:195] Run: cat /version.json
	I1227 09:37:53.540649  647990 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-157923
	I1227 09:37:53.540666  647990 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1227 09:37:53.540740  647990 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-157923
	I1227 09:37:53.560072  647990 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33483 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/kindnet-157923/id_rsa Username:docker}
	I1227 09:37:53.560339  647990 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33483 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/kindnet-157923/id_rsa Username:docker}
	I1227 09:37:53.646557  647990 ssh_runner.go:195] Run: systemctl --version
	I1227 09:37:53.703207  647990 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1227 09:37:53.740466  647990 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1227 09:37:53.746989  647990 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1227 09:37:53.747094  647990 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1227 09:37:53.777411  647990 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1227 09:37:53.777440  647990 start.go:496] detecting cgroup driver to use...
	I1227 09:37:53.777469  647990 detect.go:190] detected "systemd" cgroup driver on host os
	I1227 09:37:53.777516  647990 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1227 09:37:53.798340  647990 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 09:37:53.810539  647990 docker.go:218] disabling cri-docker service (if available) ...
	I1227 09:37:53.810603  647990 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1227 09:37:53.826933  647990 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1227 09:37:53.843174  647990 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1227 09:37:53.943482  647990 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1227 09:37:54.039822  647990 docker.go:234] disabling docker service ...
	I1227 09:37:54.039894  647990 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1227 09:37:54.059526  647990 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1227 09:37:54.073514  647990 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1227 09:37:54.160971  647990 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1227 09:37:54.241500  647990 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1227 09:37:54.254824  647990 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 09:37:54.269456  647990 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1227 09:37:54.269503  647990 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:37:54.285951  647990 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1227 09:37:54.286035  647990 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:37:54.297566  647990 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:37:54.308615  647990 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:37:54.318262  647990 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1227 09:37:54.326619  647990 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:37:54.335391  647990 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:37:54.348955  647990 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:37:54.358248  647990 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1227 09:37:54.366258  647990 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1227 09:37:54.374340  647990 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 09:37:54.463096  647990 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1227 09:37:54.617694  647990 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1227 09:37:54.617767  647990 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1227 09:37:54.621607  647990 start.go:574] Will wait 60s for crictl version
	I1227 09:37:54.621659  647990 ssh_runner.go:195] Run: which crictl
	I1227 09:37:54.625375  647990 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1227 09:37:54.650450  647990 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1227 09:37:54.650531  647990 ssh_runner.go:195] Run: crio --version
	I1227 09:37:54.685299  647990 ssh_runner.go:195] Run: crio --version
	I1227 09:37:54.721668  647990 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.34.3 ...
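	
	For reference, the CRI-O setup performed above can be reproduced by hand; a minimal sketch assembled from the commands visible in this log (the commented drop-in contents are the state the sed edits converge on, not a capture from the node):
	
	  # Point crictl at the CRI-O socket, as the tee step above does
	  printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
	  # After the sed edits, /etc/crio/crio.conf.d/02-crio.conf should hold roughly:
	  #   pause_image = "registry.k8s.io/pause:3.10.1"
	  #   cgroup_manager = "systemd"
	  #   conmon_cgroup = "pod"
	  #   default_sysctls = [ "net.ipv4.ip_unprivileged_port_start=0" ]
	  # Enable forwarding and restart the runtime, as the log does
	  sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	  sudo systemctl daemon-reload && sudo systemctl restart crio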
	
	
	==> CRI-O <==
	Dec 27 09:37:45 default-k8s-diff-port-497722 crio[568]: time="2025-12-27T09:37:45.812273517Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 09:37:45 default-k8s-diff-port-497722 crio[568]: time="2025-12-27T09:37:45.812954179Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 09:37:45 default-k8s-diff-port-497722 crio[568]: time="2025-12-27T09:37:45.848248594Z" level=info msg="Created container a14b289c8f8518d9f61f525c74ad8885cda783871570ab9f26261ca1353ef964: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-mvrjk/dashboard-metrics-scraper" id=924a81c9-499e-4c8c-b584-a1aad10b8a56 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 09:37:45 default-k8s-diff-port-497722 crio[568]: time="2025-12-27T09:37:45.848908135Z" level=info msg="Starting container: a14b289c8f8518d9f61f525c74ad8885cda783871570ab9f26261ca1353ef964" id=dcddc79c-3d47-4f2c-8945-ac52a88a4975 name=/runtime.v1.RuntimeService/StartContainer
	Dec 27 09:37:45 default-k8s-diff-port-497722 crio[568]: time="2025-12-27T09:37:45.851001862Z" level=info msg="Started container" PID=1755 containerID=a14b289c8f8518d9f61f525c74ad8885cda783871570ab9f26261ca1353ef964 description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-mvrjk/dashboard-metrics-scraper id=dcddc79c-3d47-4f2c-8945-ac52a88a4975 name=/runtime.v1.RuntimeService/StartContainer sandboxID=97eab3c2bf8d727c3dc446ab290e6f73cee8ff36a2eae9d478616b9c1c9c756f
	Dec 27 09:37:45 default-k8s-diff-port-497722 crio[568]: time="2025-12-27T09:37:45.975269999Z" level=info msg="Removing container: 06c1182efc2f4a83f2a6b8a54c99b83972f25d87656601803c40085af5641187" id=f3d84bf0-be0c-4b22-adf1-b46286b59e83 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 27 09:37:45 default-k8s-diff-port-497722 crio[568]: time="2025-12-27T09:37:45.987418074Z" level=info msg="Removed container 06c1182efc2f4a83f2a6b8a54c99b83972f25d87656601803c40085af5641187: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-mvrjk/dashboard-metrics-scraper" id=f3d84bf0-be0c-4b22-adf1-b46286b59e83 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 27 09:37:46 default-k8s-diff-port-497722 crio[568]: time="2025-12-27T09:37:46.579912375Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 27 09:37:46 default-k8s-diff-port-497722 crio[568]: time="2025-12-27T09:37:46.584387966Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 27 09:37:46 default-k8s-diff-port-497722 crio[568]: time="2025-12-27T09:37:46.584412546Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 27 09:37:46 default-k8s-diff-port-497722 crio[568]: time="2025-12-27T09:37:46.584430056Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 27 09:37:46 default-k8s-diff-port-497722 crio[568]: time="2025-12-27T09:37:46.588123573Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 27 09:37:46 default-k8s-diff-port-497722 crio[568]: time="2025-12-27T09:37:46.58814912Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 27 09:37:46 default-k8s-diff-port-497722 crio[568]: time="2025-12-27T09:37:46.588166274Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 27 09:37:46 default-k8s-diff-port-497722 crio[568]: time="2025-12-27T09:37:46.591585682Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 27 09:37:46 default-k8s-diff-port-497722 crio[568]: time="2025-12-27T09:37:46.591607136Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 27 09:37:46 default-k8s-diff-port-497722 crio[568]: time="2025-12-27T09:37:46.591622981Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 27 09:37:46 default-k8s-diff-port-497722 crio[568]: time="2025-12-27T09:37:46.594994394Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 27 09:37:46 default-k8s-diff-port-497722 crio[568]: time="2025-12-27T09:37:46.595017666Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 27 09:37:46 default-k8s-diff-port-497722 crio[568]: time="2025-12-27T09:37:46.595034421Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 27 09:37:46 default-k8s-diff-port-497722 crio[568]: time="2025-12-27T09:37:46.598327483Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 27 09:37:46 default-k8s-diff-port-497722 crio[568]: time="2025-12-27T09:37:46.598347228Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 27 09:37:46 default-k8s-diff-port-497722 crio[568]: time="2025-12-27T09:37:46.598363449Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 27 09:37:46 default-k8s-diff-port-497722 crio[568]: time="2025-12-27T09:37:46.602099154Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 27 09:37:46 default-k8s-diff-port-497722 crio[568]: time="2025-12-27T09:37:46.602127929Z" level=info msg="Updated default CNI network name to kindnet"
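	
	The CREATE/WRITE/RENAME sequence above is kindnet atomically replacing its CNI config: it writes 10-kindnet.conflist.temp and renames it over 10-kindnet.conflist, and CRI-O re-reads the directory on each event. A quick way to inspect the installed config on the node (container name taken from this run):
	
	  docker exec default-k8s-diff-port-497722 sudo cat /etc/cni/net.d/10-kindnet.conflist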
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	a14b289c8f851       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           9 seconds ago       Exited              dashboard-metrics-scraper   3                   97eab3c2bf8d7       dashboard-metrics-scraper-867fb5f87b-mvrjk             kubernetes-dashboard
	cbe3be0499b1e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           18 seconds ago      Running             storage-provisioner         1                   e5b36dc3eb8e7       storage-provisioner                                    kube-system
	54e4f5a34233f       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   38 seconds ago      Running             kubernetes-dashboard        0                   bd97ba7583b34       kubernetes-dashboard-b84665fb8-f9pn7                   kubernetes-dashboard
	c8f5a6ae9dd6d       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           49 seconds ago      Running             busybox                     1                   46d9ae3244828       busybox                                                default
	772b7ab9392d0       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                           49 seconds ago      Running             coredns                     0                   6a93edef05a44       coredns-7d764666f9-wfv5r                               kube-system
	30fa25883572a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           49 seconds ago      Exited              storage-provisioner         0                   e5b36dc3eb8e7       storage-provisioner                                    kube-system
	def2adfb364c1       32652ff1bbe6b6df16a3dc621fcad4e6e185a2852bffc5847aab79e36c54bfb8                                           49 seconds ago      Running             kube-proxy                  0                   098c50160a75a       kube-proxy-6z4vt                                       kube-system
	37e5bbef1518a       4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251                                           49 seconds ago      Running             kindnet-cni                 0                   2b00bd8535b9e       kindnet-rd4dj                                          kube-system
	f7a05d26251e8       5c6acd67e9cd10eb60a246cd233db251ef62ea97e6572f897e873f0cb648f499                                           52 seconds ago      Running             kube-apiserver              0                   bcd0a5bbcca2d       kube-apiserver-default-k8s-diff-port-497722            kube-system
	05ea82c1bf330       550794e3b12ac21ec7fd940bdfb45f7c8dae3c52acd8c70e580b882de20c3dcc                                           52 seconds ago      Running             kube-scheduler              0                   a0f8671f048d1       kube-scheduler-default-k8s-diff-port-497722            kube-system
	0c8a5140613f4       0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2                                           52 seconds ago      Running             etcd                        0                   e7586553fd42d       etcd-default-k8s-diff-port-497722                      kube-system
	0f35c56ae0629       2c9a4b058bd7e6ec479c38f9e8a1dac2f5ee5b0a3ebda6dfac968e8720229508                                           52 seconds ago      Running             kube-controller-manager     0                   a7f1d02239e1c       kube-controller-manager-default-k8s-diff-port-497722   kube-system
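	
	The same table can be regenerated on the node with crictl, which an earlier step in this run pointed at /var/run/crio/crio.sock:
	
	  docker exec default-k8s-diff-port-497722 sudo crictl ps -a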
	
	
	==> coredns [772b7ab9392d0f203c9344bdcc4efd5006401e9b969058872cfb1d1c6c1826b6] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:49281 - 779 "HINFO IN 2340639938046049120.2113270316392716189. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.016536351s
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
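	
	The "waiting for Kubernetes API" and "Failed to watch" entries line up with the dial-timeout errors in the kindnet and storage-provisioner logs below: for roughly 30s after the restart, pods could not reach the apiserver through the service VIP 10.96.0.1:443. A sketch of retesting the VIP from inside the cluster, assuming a busybox image whose nc supports -z/-w:
	
	  kubectl --context default-k8s-diff-port-497722 run vip-check --rm -i --restart=Never \
	    --image=busybox -- nc -z -w 5 10.96.0.1 443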
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-497722
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-497722
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6d12708197eb5984f26340287ced0d25367967e8
	                    minikube.k8s.io/name=default-k8s-diff-port-497722
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_27T09_36_05_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 27 Dec 2025 09:36:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-497722
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 27 Dec 2025 09:37:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 27 Dec 2025 09:37:35 +0000   Sat, 27 Dec 2025 09:36:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 27 Dec 2025 09:37:35 +0000   Sat, 27 Dec 2025 09:36:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 27 Dec 2025 09:37:35 +0000   Sat, 27 Dec 2025 09:36:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 27 Dec 2025 09:37:35 +0000   Sat, 27 Dec 2025 09:36:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    default-k8s-diff-port-497722
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 6a05ebbd9dbde47388fc365d694bbbfd
	  System UUID:                e2fdc3b1-3a68-4551-be95-6955cffc1d64
	  Boot ID:                    38891013-55ac-4a66-aef6-0b0711ecc60c
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 coredns-7d764666f9-wfv5r                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     106s
	  kube-system                 etcd-default-k8s-diff-port-497722                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         113s
	  kube-system                 kindnet-rd4dj                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      106s
	  kube-system                 kube-apiserver-default-k8s-diff-port-497722             250m (3%)     0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-497722    200m (2%)     0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-proxy-6z4vt                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kube-system                 kube-scheduler-default-k8s-diff-port-497722             100m (1%)     0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kubernetes-dashboard        dashboard-metrics-scraper-867fb5f87b-mvrjk              0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-f9pn7                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  107s  node-controller  Node default-k8s-diff-port-497722 event: Registered Node default-k8s-diff-port-497722 in Controller
	  Normal  RegisteredNode  47s   node-controller  Node default-k8s-diff-port-497722 event: Registered Node default-k8s-diff-port-497722 in Controller
	
	
	==> dmesg <==
	[Dec27 09:02] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 3a ce 76 ea 00 16 08 06
	[ +15.308804] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 5e ed 76 f4 5a 59 08 06
	[  +0.000321] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 3a ce 76 ea 00 16 08 06
	[  +1.257710] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 62 30 8d 6e b6 d3 08 06
	[Dec27 09:03] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff de f8 3b c4 47 35 08 06
	[  +0.002052] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff de 0f d7 5a 4b 84 08 06
	[ +15.582399] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff f6 ce 0c 6c 86 d2 08 06
	[  +0.000307] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 62 30 8d 6e b6 d3 08 06
	[  +9.916919] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 5e cc 0c a0 d8 19 08 06
	[  +0.000360] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 06 01 72 f8 79 43 08 06
	[ +21.695394] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 66 bb be 81 50 ce 08 06
	[  +0.000381] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff de 0f d7 5a 4b 84 08 06
	
	
	==> etcd [0c8a5140613f466c8107ca22c4400874507b9d96db9fc14bb7f9ecf967957942] <==
	{"level":"info","ts":"2025-12-27T09:37:03.370368Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-27T09:37:03.370400Z","caller":"embed/etcd.go:292","msg":"now serving peer/client/metrics","local-member-id":"f23060b075c4c089","initial-advertise-peer-urls":["https://192.168.103.2:2380"],"listen-peer-urls":["https://192.168.103.2:2380"],"advertise-client-urls":["https://192.168.103.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.103.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-12-27T09:37:03.370448Z","caller":"embed/etcd.go:890","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-12-27T09:37:03.370716Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"f23060b075c4c089 switched to configuration voters=(17451554867067011209)"}
	{"level":"info","ts":"2025-12-27T09:37:03.370780Z","caller":"membership/cluster.go:433","msg":"ignore already added member","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","added-peer-id":"f23060b075c4c089","added-peer-peer-urls":["https://192.168.103.2:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2025-12-27T09:37:03.371034Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","from":"3.6","to":"3.6"}
	{"level":"info","ts":"2025-12-27T09:37:03.661819Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"f23060b075c4c089 is starting a new election at term 2"}
	{"level":"info","ts":"2025-12-27T09:37:03.661873Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"f23060b075c4c089 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-12-27T09:37:03.661933Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"f23060b075c4c089 received MsgPreVoteResp from f23060b075c4c089 at term 2"}
	{"level":"info","ts":"2025-12-27T09:37:03.661944Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"f23060b075c4c089 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-27T09:37:03.661963Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"f23060b075c4c089 became candidate at term 3"}
	{"level":"info","ts":"2025-12-27T09:37:03.667909Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"f23060b075c4c089 received MsgVoteResp from f23060b075c4c089 at term 3"}
	{"level":"info","ts":"2025-12-27T09:37:03.667959Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"f23060b075c4c089 has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-27T09:37:03.667983Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"f23060b075c4c089 became leader at term 3"}
	{"level":"info","ts":"2025-12-27T09:37:03.667994Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: f23060b075c4c089 elected leader f23060b075c4c089 at term 3"}
	{"level":"info","ts":"2025-12-27T09:37:03.672554Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"f23060b075c4c089","local-member-attributes":"{Name:default-k8s-diff-port-497722 ClientURLs:[https://192.168.103.2:2379]}","cluster-id":"3336683c081d149d","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-27T09:37:03.672595Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T09:37:03.672614Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T09:37:03.673997Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-27T09:37:03.698684Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-27T09:37:03.698783Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-27T09:37:03.718286Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-27T09:37:03.728486Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-27T09:37:03.729641Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.103.2:2379"}
	{"level":"info","ts":"2025-12-27T09:37:25.576914Z","caller":"traceutil/trace.go:172","msg":"trace[700906283] transaction","detail":"{read_only:false; response_revision:617; number_of_response:1; }","duration":"124.959216ms","start":"2025-12-27T09:37:25.451928Z","end":"2025-12-27T09:37:25.576887Z","steps":["trace[700906283] 'process raft request'  (duration: 124.768555ms)"],"step_count":1}
	
	
	==> kernel <==
	 09:37:55 up  1:20,  0 user,  load average: 3.07, 3.08, 2.37
	Linux default-k8s-diff-port-497722 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [37e5bbef1518ab34328ea74c051dd01a3e28b525871fc9d8ccc080b6095d603d] <==
	I1227 09:37:06.376051       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1227 09:37:06.376408       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1227 09:37:06.376626       1 main.go:148] setting mtu 1500 for CNI 
	I1227 09:37:06.376661       1 main.go:178] kindnetd IP family: "ipv4"
	I1227 09:37:06.376692       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-27T09:37:06Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1227 09:37:06.579322       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1227 09:37:06.579407       1 controller.go:381] "Waiting for informer caches to sync"
	I1227 09:37:06.579419       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1227 09:37:06.580967       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1227 09:37:36.579700       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1227 09:37:36.580875       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1227 09:37:36.580925       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1227 09:37:36.580925       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1227 09:37:37.680777       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1227 09:37:37.680884       1 metrics.go:72] Registering metrics
	I1227 09:37:37.680967       1 controller.go:711] "Syncing nftables rules"
	I1227 09:37:46.579558       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1227 09:37:46.579617       1 main.go:301] handling current node
	
	
	==> kube-apiserver [f7a05d26251e8a9f4091116e036a79a8654a182636649e96285fc252c0530199] <==
	I1227 09:37:05.110973       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1227 09:37:05.111145       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1227 09:37:05.111161       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1227 09:37:05.111310       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1227 09:37:05.111363       1 aggregator.go:187] initial CRD sync complete...
	I1227 09:37:05.111380       1 autoregister_controller.go:144] Starting autoregister controller
	I1227 09:37:05.111386       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1227 09:37:05.111395       1 cache.go:39] Caches are synced for autoregister controller
	I1227 09:37:05.111180       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1227 09:37:05.113879       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1227 09:37:05.117360       1 shared_informer.go:377] "Caches are synced"
	I1227 09:37:05.117469       1 policy_source.go:248] refreshing policies
	E1227 09:37:05.118692       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1227 09:37:05.150460       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1227 09:37:05.530311       1 controller.go:667] quota admission added evaluator for: namespaces
	I1227 09:37:05.566198       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1227 09:37:05.588262       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1227 09:37:05.596429       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1227 09:37:05.604314       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1227 09:37:05.651293       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.104.208.206"}
	I1227 09:37:05.672931       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.101.149.77"}
	I1227 09:37:06.007301       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1227 09:37:08.765422       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1227 09:37:08.861972       1 controller.go:667] quota admission added evaluator for: endpoints
	I1227 09:37:08.961738       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [0f35c56ae0629feaaf5192c69a9a652f101e67591c4c93f500daa6ceb2a62911] <==
	I1227 09:37:08.267758       1 shared_informer.go:377] "Caches are synced"
	I1227 09:37:08.267868       1 shared_informer.go:377] "Caches are synced"
	I1227 09:37:08.267964       1 shared_informer.go:377] "Caches are synced"
	I1227 09:37:08.268089       1 shared_informer.go:377] "Caches are synced"
	I1227 09:37:08.268132       1 shared_informer.go:377] "Caches are synced"
	I1227 09:37:08.268284       1 shared_informer.go:377] "Caches are synced"
	I1227 09:37:08.268363       1 shared_informer.go:377] "Caches are synced"
	I1227 09:37:08.268387       1 shared_informer.go:377] "Caches are synced"
	I1227 09:37:08.268407       1 shared_informer.go:377] "Caches are synced"
	I1227 09:37:08.268422       1 shared_informer.go:377] "Caches are synced"
	I1227 09:37:08.268841       1 shared_informer.go:377] "Caches are synced"
	I1227 09:37:08.268879       1 shared_informer.go:377] "Caches are synced"
	I1227 09:37:08.268955       1 shared_informer.go:377] "Caches are synced"
	I1227 09:37:08.268962       1 shared_informer.go:377] "Caches are synced"
	I1227 09:37:08.269029       1 shared_informer.go:377] "Caches are synced"
	I1227 09:37:08.269226       1 shared_informer.go:377] "Caches are synced"
	I1227 09:37:08.269344       1 shared_informer.go:377] "Caches are synced"
	I1227 09:37:08.269364       1 shared_informer.go:377] "Caches are synced"
	I1227 09:37:08.274289       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 09:37:08.275625       1 shared_informer.go:377] "Caches are synced"
	I1227 09:37:08.369436       1 shared_informer.go:377] "Caches are synced"
	I1227 09:37:08.369452       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1227 09:37:08.369456       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1227 09:37:08.374683       1 shared_informer.go:377] "Caches are synced"
	I1227 09:37:08.965584       1 endpointslice_controller.go:361] "Error syncing endpoint slices for service, retrying" key="kubernetes-dashboard/kubernetes-dashboard" err="EndpointSlice informer cache is out of date"
	
	
	==> kube-proxy [def2adfb364c1d7110ce2224f01de177f744737fe0b1a459d15cf21d75aa4b3c] <==
	I1227 09:37:06.245213       1 server_linux.go:53] "Using iptables proxy"
	I1227 09:37:06.324370       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 09:37:06.425015       1 shared_informer.go:377] "Caches are synced"
	I1227 09:37:06.425053       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1227 09:37:06.425149       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1227 09:37:06.449105       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1227 09:37:06.449187       1 server_linux.go:136] "Using iptables Proxier"
	I1227 09:37:06.455709       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1227 09:37:06.456219       1 server.go:529] "Version info" version="v1.35.0"
	I1227 09:37:06.456310       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 09:37:06.458577       1 config.go:403] "Starting serviceCIDR config controller"
	I1227 09:37:06.458950       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1227 09:37:06.458644       1 config.go:309] "Starting node config controller"
	I1227 09:37:06.459128       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1227 09:37:06.459342       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1227 09:37:06.458885       1 config.go:106] "Starting endpoint slice config controller"
	I1227 09:37:06.459461       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1227 09:37:06.458717       1 config.go:200] "Starting service config controller"
	I1227 09:37:06.459752       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1227 09:37:06.559274       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1227 09:37:06.560855       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1227 09:37:06.560896       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [05ea82c1bf330e30a293e6d7ea1c01b1766ebce50eeaf8868bbcf622fd71d8e8] <==
	I1227 09:37:03.798933       1 serving.go:386] Generated self-signed cert in-memory
	W1227 09:37:05.053294       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1227 09:37:05.053326       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1227 09:37:05.053339       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1227 09:37:05.053349       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1227 09:37:05.115752       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0"
	I1227 09:37:05.115787       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 09:37:05.118677       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1227 09:37:05.118726       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 09:37:05.118875       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1227 09:37:05.119111       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1227 09:37:05.219889       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 27 09:37:22 default-k8s-diff-port-497722 kubelet[732]: E1227 09:37:22.705759     732 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-mvrjk_kubernetes-dashboard(13912b50-df1b-456e-9c7d-9c12b2a4c3bb)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-mvrjk" podUID="13912b50-df1b-456e-9c7d-9c12b2a4c3bb"
	Dec 27 09:37:23 default-k8s-diff-port-497722 kubelet[732]: E1227 09:37:23.802136     732 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-mvrjk" containerName="dashboard-metrics-scraper"
	Dec 27 09:37:23 default-k8s-diff-port-497722 kubelet[732]: I1227 09:37:23.802185     732 scope.go:122] "RemoveContainer" containerID="37b4a11f3cffff06b0ca4265921cfdc34c44d2dd49c9e5a292fc629279902447"
	Dec 27 09:37:23 default-k8s-diff-port-497722 kubelet[732]: I1227 09:37:23.912175     732 scope.go:122] "RemoveContainer" containerID="37b4a11f3cffff06b0ca4265921cfdc34c44d2dd49c9e5a292fc629279902447"
	Dec 27 09:37:23 default-k8s-diff-port-497722 kubelet[732]: E1227 09:37:23.912395     732 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-mvrjk" containerName="dashboard-metrics-scraper"
	Dec 27 09:37:23 default-k8s-diff-port-497722 kubelet[732]: I1227 09:37:23.912429     732 scope.go:122] "RemoveContainer" containerID="06c1182efc2f4a83f2a6b8a54c99b83972f25d87656601803c40085af5641187"
	Dec 27 09:37:23 default-k8s-diff-port-497722 kubelet[732]: E1227 09:37:23.912632     732 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-mvrjk_kubernetes-dashboard(13912b50-df1b-456e-9c7d-9c12b2a4c3bb)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-mvrjk" podUID="13912b50-df1b-456e-9c7d-9c12b2a4c3bb"
	Dec 27 09:37:32 default-k8s-diff-port-497722 kubelet[732]: E1227 09:37:32.705518     732 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-mvrjk" containerName="dashboard-metrics-scraper"
	Dec 27 09:37:32 default-k8s-diff-port-497722 kubelet[732]: I1227 09:37:32.705568     732 scope.go:122] "RemoveContainer" containerID="06c1182efc2f4a83f2a6b8a54c99b83972f25d87656601803c40085af5641187"
	Dec 27 09:37:32 default-k8s-diff-port-497722 kubelet[732]: E1227 09:37:32.705784     732 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-mvrjk_kubernetes-dashboard(13912b50-df1b-456e-9c7d-9c12b2a4c3bb)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-mvrjk" podUID="13912b50-df1b-456e-9c7d-9c12b2a4c3bb"
	Dec 27 09:37:36 default-k8s-diff-port-497722 kubelet[732]: I1227 09:37:36.947821     732 scope.go:122] "RemoveContainer" containerID="30fa25883572a506bd41a669b17278222880dcd3554abbe51f237350512fd65e"
	Dec 27 09:37:39 default-k8s-diff-port-497722 kubelet[732]: E1227 09:37:39.706000     732 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-wfv5r" containerName="coredns"
	Dec 27 09:37:45 default-k8s-diff-port-497722 kubelet[732]: E1227 09:37:45.802204     732 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-mvrjk" containerName="dashboard-metrics-scraper"
	Dec 27 09:37:45 default-k8s-diff-port-497722 kubelet[732]: I1227 09:37:45.802254     732 scope.go:122] "RemoveContainer" containerID="06c1182efc2f4a83f2a6b8a54c99b83972f25d87656601803c40085af5641187"
	Dec 27 09:37:45 default-k8s-diff-port-497722 kubelet[732]: I1227 09:37:45.973975     732 scope.go:122] "RemoveContainer" containerID="06c1182efc2f4a83f2a6b8a54c99b83972f25d87656601803c40085af5641187"
	Dec 27 09:37:45 default-k8s-diff-port-497722 kubelet[732]: E1227 09:37:45.974186     732 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-mvrjk" containerName="dashboard-metrics-scraper"
	Dec 27 09:37:45 default-k8s-diff-port-497722 kubelet[732]: I1227 09:37:45.974222     732 scope.go:122] "RemoveContainer" containerID="a14b289c8f8518d9f61f525c74ad8885cda783871570ab9f26261ca1353ef964"
	Dec 27 09:37:45 default-k8s-diff-port-497722 kubelet[732]: E1227 09:37:45.974404     732 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-mvrjk_kubernetes-dashboard(13912b50-df1b-456e-9c7d-9c12b2a4c3bb)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-mvrjk" podUID="13912b50-df1b-456e-9c7d-9c12b2a4c3bb"
	Dec 27 09:37:52 default-k8s-diff-port-497722 kubelet[732]: E1227 09:37:52.704887     732 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-mvrjk" containerName="dashboard-metrics-scraper"
	Dec 27 09:37:52 default-k8s-diff-port-497722 kubelet[732]: I1227 09:37:52.704940     732 scope.go:122] "RemoveContainer" containerID="a14b289c8f8518d9f61f525c74ad8885cda783871570ab9f26261ca1353ef964"
	Dec 27 09:37:52 default-k8s-diff-port-497722 kubelet[732]: E1227 09:37:52.705164     732 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-mvrjk_kubernetes-dashboard(13912b50-df1b-456e-9c7d-9c12b2a4c3bb)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-mvrjk" podUID="13912b50-df1b-456e-9c7d-9c12b2a4c3bb"
	Dec 27 09:37:53 default-k8s-diff-port-497722 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 27 09:37:53 default-k8s-diff-port-497722 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 27 09:37:53 default-k8s-diff-port-497722 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 27 09:37:53 default-k8s-diff-port-497722 systemd[1]: kubelet.service: Consumed 1.742s CPU time.
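	
	The kubelet entries above show dashboard-metrics-scraper in CrashLoopBackOff, with the back-off doubling from 10s to 20s to 40s, until systemd stops the kubelet at the end of the test. A natural follow-up is to pull the previous container's logs and the pod events (names from this run):
	
	  kubectl --context default-k8s-diff-port-497722 -n kubernetes-dashboard \
	    logs dashboard-metrics-scraper-867fb5f87b-mvrjk --previous
	  kubectl --context default-k8s-diff-port-497722 -n kubernetes-dashboard \
	    describe pod dashboard-metrics-scraper-867fb5f87b-mvrjk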
	
	
	==> kubernetes-dashboard [54e4f5a34233f7629865a95e361beb06cbd770029b4c9e51b5593b268204cffc] <==
	2025/12/27 09:37:16 Starting overwatch
	2025/12/27 09:37:16 Using namespace: kubernetes-dashboard
	2025/12/27 09:37:16 Using in-cluster config to connect to apiserver
	2025/12/27 09:37:16 Using secret token for csrf signing
	2025/12/27 09:37:16 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/27 09:37:16 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/27 09:37:16 Successful initial request to the apiserver, version: v1.35.0
	2025/12/27 09:37:16 Generating JWE encryption key
	2025/12/27 09:37:16 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/27 09:37:16 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/27 09:37:17 Initializing JWE encryption key from synchronized object
	2025/12/27 09:37:17 Creating in-cluster Sidecar client
	2025/12/27 09:37:17 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/27 09:37:17 Serving insecurely on HTTP port: 9090
	2025/12/27 09:37:47 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [30fa25883572a506bd41a669b17278222880dcd3554abbe51f237350512fd65e] <==
	I1227 09:37:06.213062       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1227 09:37:36.215302       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [cbe3be0499b1e292ff93ccb690ac42127d4b7da2ccfb910a73b5aa34c1bbfa2c] <==
	I1227 09:37:37.021267       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1227 09:37:37.030288       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1227 09:37:37.030337       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1227 09:37:37.032505       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 09:37:40.488374       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 09:37:44.748668       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 09:37:48.347654       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 09:37:51.433776       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 09:37:54.456705       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 09:37:54.461823       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1227 09:37:54.461993       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1227 09:37:54.462185       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-497722_bee666ce-bfab-4fe6-8bce-9cbf3cdf7e5a!
	I1227 09:37:54.462183       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"bb43f7f4-07db-4bad-82fa-044874eea265", APIVersion:"v1", ResourceVersion:"647", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-497722_bee666ce-bfab-4fe6-8bce-9cbf3cdf7e5a became leader
	W1227 09:37:54.466218       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 09:37:54.471618       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1227 09:37:54.563306       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-497722_bee666ce-bfab-4fe6-8bce-9cbf3cdf7e5a!
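	
	The Endpoints deprecation warnings come from the provisioner's leader election, which still takes a v1 Endpoints lock (kube-system/k8s.io-minikube-hostpath, as the election messages above show). The lock object can be inspected directly:
	
	  kubectl --context default-k8s-diff-port-497722 -n kube-system \
	    get endpoints k8s.io-minikube-hostpath -o yaml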
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-497722 -n default-k8s-diff-port-497722
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-497722 -n default-k8s-diff-port-497722: exit status 2 (344.35831ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context default-k8s-diff-port-497722 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect default-k8s-diff-port-497722
helpers_test.go:244: (dbg) docker inspect default-k8s-diff-port-497722:

-- stdout --
	[
	    {
	        "Id": "69d33a148b7ca9ade22e9948d8b57467a62cb59e5e045051bc08113923c77288",
	        "Created": "2025-12-27T09:35:53.140774946Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 631890,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-27T09:36:56.395973569Z",
	            "FinishedAt": "2025-12-27T09:36:54.268457697Z"
	        },
	        "Image": "sha256:c444b5e11df9d6bc496256b0e11f4e11deb33a885211ded2c3a4667df55bcb6b",
	        "ResolvConfPath": "/var/lib/docker/containers/69d33a148b7ca9ade22e9948d8b57467a62cb59e5e045051bc08113923c77288/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/69d33a148b7ca9ade22e9948d8b57467a62cb59e5e045051bc08113923c77288/hostname",
	        "HostsPath": "/var/lib/docker/containers/69d33a148b7ca9ade22e9948d8b57467a62cb59e5e045051bc08113923c77288/hosts",
	        "LogPath": "/var/lib/docker/containers/69d33a148b7ca9ade22e9948d8b57467a62cb59e5e045051bc08113923c77288/69d33a148b7ca9ade22e9948d8b57467a62cb59e5e045051bc08113923c77288-json.log",
	        "Name": "/default-k8s-diff-port-497722",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-497722:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-497722",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "69d33a148b7ca9ade22e9948d8b57467a62cb59e5e045051bc08113923c77288",
	                "LowerDir": "/var/lib/docker/overlay2/9b457a702b8ced050755c1f6a8fdb882cc79faef36a30cb52139dd971b84f61a-init/diff:/var/lib/docker/overlay2/8c7e7d4b0a6e752d3b033998593f18412fe880cdae3fca36ce4655d5d5dd6a34/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9b457a702b8ced050755c1f6a8fdb882cc79faef36a30cb52139dd971b84f61a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9b457a702b8ced050755c1f6a8fdb882cc79faef36a30cb52139dd971b84f61a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9b457a702b8ced050755c1f6a8fdb882cc79faef36a30cb52139dd971b84f61a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-497722",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-497722/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-497722",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-497722",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-497722",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "bdf3bd411840e79ffa85ddd2cdcf7664c7a9612af15aca0529d23d654d5b90dc",
	            "SandboxKey": "/var/run/docker/netns/bdf3bd411840",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33468"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33469"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33472"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33470"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33471"
	                    }
	                ]
	            },
	            "Networks": {
	                "default-k8s-diff-port-497722": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "3e6df945c4cd8ddce46059c83cc7bed3e6a73494a8b947e27d2c18ad8eacf919",
	                    "EndpointID": "e8aa79f3440b532c08e945e346b16babaa3380c5634f492816d85115cd823daa",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "82:ad:5a:7b:85:45",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-497722",
	                        "69d33a148b7c"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
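Note on the inspect output above: the empty HostPort values under HostConfig.PortBindings come from publish flags of the form --publish=127.0.0.1::22, which ask Docker to bind an ephemeral host port; the ports actually bound appear only under NetworkSettings.Ports (22/tcp -> 127.0.0.1:33468 here). A minimal sketch for resolving such a mapping by hand, reusing the same Go template minikube runs elsewhere in this report:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' default-k8s-diff-port-497722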
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-497722 -n default-k8s-diff-port-497722
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-497722 -n default-k8s-diff-port-497722: exit status 2 (361.605846ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-497722 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-497722 logs -n 25: (1.202194575s)
helpers_test.go:261: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p old-k8s-version-094398                                                                                                                                                                                                                     │ old-k8s-version-094398       │ jenkins │ v1.37.0 │ 27 Dec 25 09:36 UTC │ 27 Dec 25 09:36 UTC │
	│ addons  │ enable dashboard -p no-preload-963457 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-963457            │ jenkins │ v1.37.0 │ 27 Dec 25 09:36 UTC │ 27 Dec 25 09:36 UTC │
	│ start   │ -p no-preload-963457 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                       │ no-preload-963457            │ jenkins │ v1.37.0 │ 27 Dec 25 09:36 UTC │ 27 Dec 25 09:37 UTC │
	│ delete  │ -p old-k8s-version-094398                                                                                                                                                                                                                     │ old-k8s-version-094398       │ jenkins │ v1.37.0 │ 27 Dec 25 09:36 UTC │ 27 Dec 25 09:36 UTC │
	│ start   │ -p newest-cni-246956 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ newest-cni-246956            │ jenkins │ v1.37.0 │ 27 Dec 25 09:36 UTC │ 27 Dec 25 09:37 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-497722 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-497722 │ jenkins │ v1.37.0 │ 27 Dec 25 09:36 UTC │ 27 Dec 25 09:36 UTC │
	│ start   │ -p default-k8s-diff-port-497722 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-497722 │ jenkins │ v1.37.0 │ 27 Dec 25 09:36 UTC │ 27 Dec 25 09:37 UTC │
	│ image   │ embed-certs-912564 image list --format=json                                                                                                                                                                                                   │ embed-certs-912564           │ jenkins │ v1.37.0 │ 27 Dec 25 09:37 UTC │ 27 Dec 25 09:37 UTC │
	│ pause   │ -p embed-certs-912564 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-912564           │ jenkins │ v1.37.0 │ 27 Dec 25 09:37 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-246956 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-246956            │ jenkins │ v1.37.0 │ 27 Dec 25 09:37 UTC │                     │
	│ stop    │ -p newest-cni-246956 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-246956            │ jenkins │ v1.37.0 │ 27 Dec 25 09:37 UTC │ 27 Dec 25 09:37 UTC │
	│ delete  │ -p embed-certs-912564                                                                                                                                                                                                                         │ embed-certs-912564           │ jenkins │ v1.37.0 │ 27 Dec 25 09:37 UTC │ 27 Dec 25 09:37 UTC │
	│ delete  │ -p embed-certs-912564                                                                                                                                                                                                                         │ embed-certs-912564           │ jenkins │ v1.37.0 │ 27 Dec 25 09:37 UTC │ 27 Dec 25 09:37 UTC │
	│ start   │ -p auto-157923 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-157923                  │ jenkins │ v1.37.0 │ 27 Dec 25 09:37 UTC │                     │
	│ addons  │ enable dashboard -p newest-cni-246956 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-246956            │ jenkins │ v1.37.0 │ 27 Dec 25 09:37 UTC │ 27 Dec 25 09:37 UTC │
	│ start   │ -p newest-cni-246956 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ newest-cni-246956            │ jenkins │ v1.37.0 │ 27 Dec 25 09:37 UTC │ 27 Dec 25 09:37 UTC │
	│ image   │ newest-cni-246956 image list --format=json                                                                                                                                                                                                    │ newest-cni-246956            │ jenkins │ v1.37.0 │ 27 Dec 25 09:37 UTC │ 27 Dec 25 09:37 UTC │
	│ pause   │ -p newest-cni-246956 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-246956            │ jenkins │ v1.37.0 │ 27 Dec 25 09:37 UTC │                     │
	│ delete  │ -p newest-cni-246956                                                                                                                                                                                                                          │ newest-cni-246956            │ jenkins │ v1.37.0 │ 27 Dec 25 09:37 UTC │ 27 Dec 25 09:37 UTC │
	│ delete  │ -p newest-cni-246956                                                                                                                                                                                                                          │ newest-cni-246956            │ jenkins │ v1.37.0 │ 27 Dec 25 09:37 UTC │ 27 Dec 25 09:37 UTC │
	│ start   │ -p kindnet-157923 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio                                                                                                      │ kindnet-157923               │ jenkins │ v1.37.0 │ 27 Dec 25 09:37 UTC │                     │
	│ image   │ no-preload-963457 image list --format=json                                                                                                                                                                                                    │ no-preload-963457            │ jenkins │ v1.37.0 │ 27 Dec 25 09:37 UTC │ 27 Dec 25 09:37 UTC │
	│ pause   │ -p no-preload-963457 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-963457            │ jenkins │ v1.37.0 │ 27 Dec 25 09:37 UTC │                     │
	│ image   │ default-k8s-diff-port-497722 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-497722 │ jenkins │ v1.37.0 │ 27 Dec 25 09:37 UTC │ 27 Dec 25 09:37 UTC │
	│ pause   │ -p default-k8s-diff-port-497722 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-497722 │ jenkins │ v1.37.0 │ 27 Dec 25 09:37 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/27 09:37:46
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1227 09:37:46.864671  647990 out.go:360] Setting OutFile to fd 1 ...
	I1227 09:37:46.864827  647990 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:37:46.864841  647990 out.go:374] Setting ErrFile to fd 2...
	I1227 09:37:46.864848  647990 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:37:46.865083  647990 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22343-373581/.minikube/bin
	I1227 09:37:46.865565  647990 out.go:368] Setting JSON to false
	I1227 09:37:46.866843  647990 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4811,"bootTime":1766823456,"procs":347,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1227 09:37:46.866917  647990 start.go:143] virtualization: kvm guest
	I1227 09:37:46.868763  647990 out.go:179] * [kindnet-157923] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1227 09:37:46.869915  647990 out.go:179]   - MINIKUBE_LOCATION=22343
	I1227 09:37:46.869935  647990 notify.go:221] Checking for updates...
	I1227 09:37:46.872147  647990 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 09:37:46.873206  647990 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22343-373581/kubeconfig
	I1227 09:37:46.874198  647990 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22343-373581/.minikube
	I1227 09:37:46.875076  647990 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1227 09:37:46.876025  647990 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1227 09:37:46.877293  647990 config.go:182] Loaded profile config "auto-157923": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:37:46.877377  647990 config.go:182] Loaded profile config "default-k8s-diff-port-497722": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:37:46.877469  647990 config.go:182] Loaded profile config "no-preload-963457": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:37:46.877582  647990 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 09:37:46.900930  647990 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1227 09:37:46.901023  647990 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 09:37:46.956322  647990 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-27 09:37:46.946538862 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1227 09:37:46.956429  647990 docker.go:319] overlay module found
	I1227 09:37:46.957771  647990 out.go:179] * Using the docker driver based on user configuration
	I1227 09:37:46.958739  647990 start.go:309] selected driver: docker
	I1227 09:37:46.958754  647990 start.go:928] validating driver "docker" against <nil>
	I1227 09:37:46.958765  647990 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 09:37:46.959277  647990 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 09:37:47.014097  647990 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-27 09:37:47.004076494 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1227 09:37:47.014262  647990 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1227 09:37:47.014493  647990 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 09:37:47.015832  647990 out.go:179] * Using Docker driver with root privileges
	I1227 09:37:47.016778  647990 cni.go:84] Creating CNI manager for "kindnet"
	I1227 09:37:47.016807  647990 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I1227 09:37:47.016889  647990 start.go:353] cluster config:
	{Name:kindnet-157923 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:kindnet-157923 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 09:37:47.018032  647990 out.go:179] * Starting "kindnet-157923" primary control-plane node in "kindnet-157923" cluster
	I1227 09:37:47.018884  647990 cache.go:134] Beginning downloading kic base image for docker with crio
	I1227 09:37:47.019947  647990 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
	I1227 09:37:47.021519  647990 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 09:37:47.021557  647990 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22343-373581/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4
	I1227 09:37:47.021566  647990 cache.go:65] Caching tarball of preloaded images
	I1227 09:37:47.021665  647990 preload.go:251] Found /home/jenkins/minikube-integration/22343-373581/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1227 09:37:47.021690  647990 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I1227 09:37:47.021942  647990 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1227 09:37:47.022225  647990 profile.go:143] Saving config to /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/kindnet-157923/config.json ...
	I1227 09:37:47.022262  647990 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/kindnet-157923/config.json: {Name:mkc9489786022a3c521e082ba47d43b09ee5c209 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:37:47.043904  647990 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon, skipping pull
	I1227 09:37:47.043922  647990 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in daemon, skipping load
	I1227 09:37:47.043937  647990 cache.go:243] Successfully downloaded all kic artifacts
	I1227 09:37:47.043970  647990 start.go:360] acquireMachinesLock for kindnet-157923: {Name:mk5cf38a4c59f5d9a1319baf127d324f7051b88d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 09:37:47.044057  647990 start.go:364] duration metric: took 73.03µs to acquireMachinesLock for "kindnet-157923"
	I1227 09:37:47.044079  647990 start.go:93] Provisioning new machine with config: &{Name:kindnet-157923 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:kindnet-157923 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1227 09:37:47.044147  647990 start.go:125] createHost starting for "" (driver="docker")
	I1227 09:37:44.467035  640477 addons.go:530] duration metric: took 525.080113ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1227 09:37:44.738293  640477 kapi.go:214] "coredns" deployment in "kube-system" namespace and "auto-157923" context rescaled to 1 replicas
	W1227 09:37:46.238781  640477 node_ready.go:57] node "auto-157923" has "Ready":"False" status (will retry)
	W1227 09:37:48.238941  640477 node_ready.go:57] node "auto-157923" has "Ready":"False" status (will retry)
	I1227 09:37:47.045517  647990 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1227 09:37:47.045710  647990 start.go:159] libmachine.API.Create for "kindnet-157923" (driver="docker")
	I1227 09:37:47.045736  647990 client.go:173] LocalClient.Create starting
	I1227 09:37:47.045836  647990 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca.pem
	I1227 09:37:47.045874  647990 main.go:144] libmachine: Decoding PEM data...
	I1227 09:37:47.045891  647990 main.go:144] libmachine: Parsing certificate...
	I1227 09:37:47.045952  647990 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22343-373581/.minikube/certs/cert.pem
	I1227 09:37:47.045975  647990 main.go:144] libmachine: Decoding PEM data...
	I1227 09:37:47.045984  647990 main.go:144] libmachine: Parsing certificate...
	I1227 09:37:47.046348  647990 cli_runner.go:164] Run: docker network inspect kindnet-157923 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1227 09:37:47.063570  647990 cli_runner.go:211] docker network inspect kindnet-157923 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1227 09:37:47.063648  647990 network_create.go:284] running [docker network inspect kindnet-157923] to gather additional debugging logs...
	I1227 09:37:47.063669  647990 cli_runner.go:164] Run: docker network inspect kindnet-157923
	W1227 09:37:47.079738  647990 cli_runner.go:211] docker network inspect kindnet-157923 returned with exit code 1
	I1227 09:37:47.079765  647990 network_create.go:287] error running [docker network inspect kindnet-157923]: docker network inspect kindnet-157923: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network kindnet-157923 not found
	I1227 09:37:47.079776  647990 network_create.go:289] output of [docker network inspect kindnet-157923]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network kindnet-157923 not found
	
	** /stderr **
	I1227 09:37:47.079880  647990 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 09:37:47.096505  647990 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-8c57ecff6d5e IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:1e:54:d8:ad:bd:73} reservation:<nil>}
	I1227 09:37:47.097278  647990 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-21a699476be6 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:fe:e8:d9:95:e6:36} reservation:<nil>}
	I1227 09:37:47.097729  647990 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-8e97c5356905 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:f2:d9:6b:42:f5:e3} reservation:<nil>}
	I1227 09:37:47.098498  647990 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001d83dd0}
	I1227 09:37:47.098520  647990 network_create.go:124] attempt to create docker network kindnet-157923 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1227 09:37:47.098594  647990 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kindnet-157923 kindnet-157923
	I1227 09:37:47.148980  647990 network_create.go:108] docker network kindnet-157923 192.168.76.0/24 created
	I1227 09:37:47.149015  647990 kic.go:121] calculated static IP "192.168.76.2" for the "kindnet-157923" container
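	# Sketch: the network-creation step above as standalone commands, verbatim from
	# this log. The subnet 192.168.76.0/24 is an input chosen at runtime, after the
	# taken 192.168.49/58/67 subnets were skipped, not a fixed value.
	docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kindnet-157923 kindnet-157923
	# Confirm the subnet actually assigned (standard docker template, assumed here):
	docker network inspect kindnet-157923 --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}'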
	I1227 09:37:47.149100  647990 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1227 09:37:47.166970  647990 cli_runner.go:164] Run: docker volume create kindnet-157923 --label name.minikube.sigs.k8s.io=kindnet-157923 --label created_by.minikube.sigs.k8s.io=true
	I1227 09:37:47.185066  647990 oci.go:103] Successfully created a docker volume kindnet-157923
	I1227 09:37:47.185135  647990 cli_runner.go:164] Run: docker run --rm --name kindnet-157923-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-157923 --entrypoint /usr/bin/test -v kindnet-157923:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -d /var/lib
	I1227 09:37:47.721086  647990 oci.go:107] Successfully prepared a docker volume kindnet-157923
	I1227 09:37:47.721191  647990 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 09:37:47.721212  647990 kic.go:194] Starting extracting preloaded images to volume ...
	I1227 09:37:47.721327  647990 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22343-373581/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-157923:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -I lz4 -xf /preloaded.tar -C /extractDir
	I1227 09:37:51.492743  647990 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22343-373581/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-157923:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -I lz4 -xf /preloaded.tar -C /extractDir: (3.771349551s)
	I1227 09:37:51.492802  647990 kic.go:203] duration metric: took 3.771572095s to extract preloaded images to volume ...
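	# Sketch of the preload pattern logged above: the lz4 tarball is bind-mounted
	# read-only and extracted straight into the named volume that later becomes the
	# node's /var, so the images never pass through a container filesystem.
	# PRELOAD_TAR and KICBASE_IMAGE are placeholders for the long values above.
	docker run --rm --entrypoint /usr/bin/tar -v "$PRELOAD_TAR:/preloaded.tar:ro" -v kindnet-157923:/extractDir "$KICBASE_IMAGE" -I lz4 -xf /preloaded.tar -C /extractDir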
	W1227 09:37:51.492907  647990 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1227 09:37:51.492986  647990 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1227 09:37:51.493040  647990 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1227 09:37:51.548737  647990 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kindnet-157923 --name kindnet-157923 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-157923 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kindnet-157923 --network kindnet-157923 --ip 192.168.76.2 --volume kindnet-157923:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a
	I1227 09:37:51.804564  647990 cli_runner.go:164] Run: docker container inspect kindnet-157923 --format={{.State.Running}}
	I1227 09:37:51.823570  647990 cli_runner.go:164] Run: docker container inspect kindnet-157923 --format={{.State.Status}}
	I1227 09:37:51.841753  647990 cli_runner.go:164] Run: docker exec kindnet-157923 stat /var/lib/dpkg/alternatives/iptables
	W1227 09:37:50.284486  640477 node_ready.go:57] node "auto-157923" has "Ready":"False" status (will retry)
	W1227 09:37:52.739006  640477 node_ready.go:57] node "auto-157923" has "Ready":"False" status (will retry)
	I1227 09:37:51.887055  647990 oci.go:144] the created container "kindnet-157923" has a running status.
	I1227 09:37:51.887089  647990 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22343-373581/.minikube/machines/kindnet-157923/id_rsa...
	I1227 09:37:51.990591  647990 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22343-373581/.minikube/machines/kindnet-157923/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1227 09:37:52.019096  647990 cli_runner.go:164] Run: docker container inspect kindnet-157923 --format={{.State.Status}}
	I1227 09:37:52.044443  647990 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1227 09:37:52.044486  647990 kic_runner.go:114] Args: [docker exec --privileged kindnet-157923 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1227 09:37:52.088186  647990 cli_runner.go:164] Run: docker container inspect kindnet-157923 --format={{.State.Status}}
	I1227 09:37:52.114053  647990 machine.go:94] provisionDockerMachine start ...
	I1227 09:37:52.114164  647990 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-157923
	I1227 09:37:52.136116  647990 main.go:144] libmachine: Using SSH client type: native
	I1227 09:37:52.136464  647990 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33483 <nil> <nil>}
	I1227 09:37:52.136486  647990 main.go:144] libmachine: About to run SSH command:
	hostname
	I1227 09:37:52.273019  647990 main.go:144] libmachine: SSH cmd err, output: <nil>: kindnet-157923
	
	I1227 09:37:52.273050  647990 ubuntu.go:182] provisioning hostname "kindnet-157923"
	I1227 09:37:52.273117  647990 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-157923
	I1227 09:37:52.292545  647990 main.go:144] libmachine: Using SSH client type: native
	I1227 09:37:52.292899  647990 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33483 <nil> <nil>}
	I1227 09:37:52.292924  647990 main.go:144] libmachine: About to run SSH command:
	sudo hostname kindnet-157923 && echo "kindnet-157923" | sudo tee /etc/hostname
	I1227 09:37:52.435204  647990 main.go:144] libmachine: SSH cmd err, output: <nil>: kindnet-157923
	
	I1227 09:37:52.435289  647990 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-157923
	I1227 09:37:52.454377  647990 main.go:144] libmachine: Using SSH client type: native
	I1227 09:37:52.454614  647990 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33483 <nil> <nil>}
	I1227 09:37:52.454641  647990 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skindnet-157923' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kindnet-157923/g' /etc/hosts;
				else 
					echo '127.0.1.1 kindnet-157923' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1227 09:37:52.585466  647990 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1227 09:37:52.585495  647990 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22343-373581/.minikube CaCertPath:/home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22343-373581/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22343-373581/.minikube}
	I1227 09:37:52.585537  647990 ubuntu.go:190] setting up certificates
	I1227 09:37:52.585558  647990 provision.go:84] configureAuth start
	I1227 09:37:52.585620  647990 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-157923
	I1227 09:37:52.606953  647990 provision.go:143] copyHostCerts
	I1227 09:37:52.607022  647990 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-373581/.minikube/ca.pem, removing ...
	I1227 09:37:52.607041  647990 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-373581/.minikube/ca.pem
	I1227 09:37:52.607124  647990 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22343-373581/.minikube/ca.pem (1082 bytes)
	I1227 09:37:52.607237  647990 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-373581/.minikube/cert.pem, removing ...
	I1227 09:37:52.607250  647990 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-373581/.minikube/cert.pem
	I1227 09:37:52.607292  647990 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22343-373581/.minikube/cert.pem (1123 bytes)
	I1227 09:37:52.607371  647990 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-373581/.minikube/key.pem, removing ...
	I1227 09:37:52.607382  647990 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-373581/.minikube/key.pem
	I1227 09:37:52.607433  647990 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22343-373581/.minikube/key.pem (1679 bytes)
	I1227 09:37:52.607502  647990 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22343-373581/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca-key.pem org=jenkins.kindnet-157923 san=[127.0.0.1 192.168.76.2 kindnet-157923 localhost minikube]
	I1227 09:37:52.773980  647990 provision.go:177] copyRemoteCerts
	I1227 09:37:52.774032  647990 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1227 09:37:52.774076  647990 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-157923
	I1227 09:37:52.794674  647990 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33483 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/kindnet-157923/id_rsa Username:docker}
	I1227 09:37:52.891600  647990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1227 09:37:52.910230  647990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1227 09:37:52.927214  647990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I1227 09:37:52.944338  647990 provision.go:87] duration metric: took 358.764637ms to configureAuth
	I1227 09:37:52.944365  647990 ubuntu.go:206] setting minikube options for container-runtime
	I1227 09:37:52.944517  647990 config.go:182] Loaded profile config "kindnet-157923": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:37:52.944628  647990 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-157923
	I1227 09:37:52.963064  647990 main.go:144] libmachine: Using SSH client type: native
	I1227 09:37:52.963415  647990 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33483 <nil> <nil>}
	I1227 09:37:52.963445  647990 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1227 09:37:53.246993  647990 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1227 09:37:53.247019  647990 machine.go:97] duration metric: took 1.13294086s to provisionDockerMachine
	I1227 09:37:53.247032  647990 client.go:176] duration metric: took 6.201286787s to LocalClient.Create
	I1227 09:37:53.247049  647990 start.go:167] duration metric: took 6.201338369s to libmachine.API.Create "kindnet-157923"
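	# The provisioning SSH step above drops an environment file for the crio unit
	# and restarts the service. A hand check over the same SSH mapping (port 33483
	# and key path are taken from this log; whether crio consumed the file is best
	# confirmed on the node itself):
	ssh -o StrictHostKeyChecking=no -i /home/jenkins/minikube-integration/22343-373581/.minikube/machines/kindnet-157923/id_rsa -p 33483 docker@127.0.0.1 'cat /etc/sysconfig/crio.minikube'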
	I1227 09:37:53.247059  647990 start.go:293] postStartSetup for "kindnet-157923" (driver="docker")
	I1227 09:37:53.247070  647990 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1227 09:37:53.247143  647990 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1227 09:37:53.247196  647990 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-157923
	I1227 09:37:53.266249  647990 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33483 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/kindnet-157923/id_rsa Username:docker}
	I1227 09:37:53.361547  647990 ssh_runner.go:195] Run: cat /etc/os-release
	I1227 09:37:53.365069  647990 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1227 09:37:53.365103  647990 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1227 09:37:53.365132  647990 filesync.go:126] Scanning /home/jenkins/minikube-integration/22343-373581/.minikube/addons for local assets ...
	I1227 09:37:53.365203  647990 filesync.go:126] Scanning /home/jenkins/minikube-integration/22343-373581/.minikube/files for local assets ...
	I1227 09:37:53.365317  647990 filesync.go:149] local asset: /home/jenkins/minikube-integration/22343-373581/.minikube/files/etc/ssl/certs/3771712.pem -> 3771712.pem in /etc/ssl/certs
	I1227 09:37:53.365452  647990 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1227 09:37:53.372927  647990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/files/etc/ssl/certs/3771712.pem --> /etc/ssl/certs/3771712.pem (1708 bytes)
	I1227 09:37:53.391845  647990 start.go:296] duration metric: took 144.772765ms for postStartSetup
	I1227 09:37:53.392155  647990 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-157923
	I1227 09:37:53.411616  647990 profile.go:143] Saving config to /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/kindnet-157923/config.json ...
	I1227 09:37:53.411887  647990 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 09:37:53.411930  647990 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-157923
	I1227 09:37:53.428165  647990 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33483 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/kindnet-157923/id_rsa Username:docker}
	I1227 09:37:53.516235  647990 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1227 09:37:53.521533  647990 start.go:128] duration metric: took 6.477368164s to createHost
	I1227 09:37:53.521559  647990 start.go:83] releasing machines lock for "kindnet-157923", held for 6.477490775s
	I1227 09:37:53.521631  647990 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-157923
	I1227 09:37:53.540592  647990 ssh_runner.go:195] Run: cat /version.json
	I1227 09:37:53.540649  647990 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-157923
	I1227 09:37:53.540666  647990 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1227 09:37:53.540740  647990 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-157923
	I1227 09:37:53.560072  647990 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33483 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/kindnet-157923/id_rsa Username:docker}
	I1227 09:37:53.560339  647990 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33483 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/kindnet-157923/id_rsa Username:docker}
	I1227 09:37:53.646557  647990 ssh_runner.go:195] Run: systemctl --version
	I1227 09:37:53.703207  647990 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1227 09:37:53.740466  647990 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1227 09:37:53.746989  647990 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1227 09:37:53.747094  647990 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1227 09:37:53.777411  647990 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
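The find/-exec mv step above sidelines competing bridge and podman CNI configs by renaming them with a .mk_disabled suffix, so only the CNI minikube installs stays active. Restoring them later is symmetric (illustrative):

	for f in /etc/cni/net.d/*.mk_disabled; do sudo mv "$f" "${f%.mk_disabled}"; done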
	I1227 09:37:53.777440  647990 start.go:496] detecting cgroup driver to use...
	I1227 09:37:53.777469  647990 detect.go:190] detected "systemd" cgroup driver on host os
	I1227 09:37:53.777516  647990 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1227 09:37:53.798340  647990 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 09:37:53.810539  647990 docker.go:218] disabling cri-docker service (if available) ...
	I1227 09:37:53.810603  647990 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1227 09:37:53.826933  647990 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1227 09:37:53.843174  647990 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1227 09:37:53.943482  647990 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1227 09:37:54.039822  647990 docker.go:234] disabling docker service ...
	I1227 09:37:54.039894  647990 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1227 09:37:54.059526  647990 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1227 09:37:54.073514  647990 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1227 09:37:54.160971  647990 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1227 09:37:54.241500  647990 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
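With docker.socket and docker.service stopped, disabled, and masked, nothing can restart Docker behind CRI-O's back, which is what the final is-active probe confirms. A manual spot check looks like (illustrative):

	systemctl is-active docker || echo "docker not running"
	systemctl is-enabled docker.service || true    # reports "masked" after the mask step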
	I1227 09:37:54.254824  647990 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 09:37:54.269456  647990 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1227 09:37:54.269503  647990 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:37:54.285951  647990 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1227 09:37:54.286035  647990 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:37:54.297566  647990 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:37:54.308615  647990 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:37:54.318262  647990 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1227 09:37:54.326619  647990 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:37:54.335391  647990 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:37:54.348955  647990 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
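Taken together, the sed edits above pin the pause image, switch CRI-O to the systemd cgroup manager with a pod-scoped conmon cgroup, and open unprivileged ports via a default sysctl. A quick way to eyeball the resulting drop-in (illustrative; exact contents vary with the base image):

	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	# expected, approximately:
	#   pause_image = "registry.k8s.io/pause:3.10.1"
	#   cgroup_manager = "systemd"
	#   conmon_cgroup = "pod"
	#   "net.ipv4.ip_unprivileged_port_start=0",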
	I1227 09:37:54.358248  647990 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1227 09:37:54.366258  647990 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1227 09:37:54.374340  647990 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 09:37:54.463096  647990 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1227 09:37:54.617694  647990 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1227 09:37:54.617767  647990 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1227 09:37:54.621607  647990 start.go:574] Will wait 60s for crictl version
	I1227 09:37:54.621659  647990 ssh_runner.go:195] Run: which crictl
	I1227 09:37:54.625375  647990 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1227 09:37:54.650450  647990 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1227 09:37:54.650531  647990 ssh_runner.go:195] Run: crio --version
	I1227 09:37:54.685299  647990 ssh_runner.go:195] Run: crio --version
	I1227 09:37:54.721668  647990 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.34.3 ...
	I1227 09:37:54.722882  647990 cli_runner.go:164] Run: docker network inspect kindnet-157923 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
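That Go template flattens the network's name, driver, subnet, gateway, MTU, and container IPs into one JSON-ish blob. For interactive use, a trimmed variant of the same inspect call is easier to read (illustrative):

	docker network inspect kindnet-157923 --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'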
	I1227 09:37:54.743659  647990 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1227 09:37:54.748134  647990 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
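That one-liner is minikube's idempotent hosts-file edit: filter out any stale host.minikube.internal entry, append the current mapping, and copy the temp file back into place (sudo is only needed for the final cp). Decomposed, with a hypothetical temp-file name (illustrative):

	{ grep -v $'\thost.minikube.internal$' /etc/hosts
	  echo "192.168.76.1	host.minikube.internal"
	} > /tmp/hosts.new
	sudo cp /tmp/hosts.new /etc/hosts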
	I1227 09:37:54.759743  647990 kubeadm.go:884] updating cluster {Name:kindnet-157923 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:kindnet-157923 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1227 09:37:54.759929  647990 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 09:37:54.759989  647990 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 09:37:54.800025  647990 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 09:37:54.800045  647990 crio.go:433] Images already preloaded, skipping extraction
	I1227 09:37:54.800087  647990 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 09:37:54.828967  647990 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 09:37:54.828993  647990 cache_images.go:86] Images are preloaded, skipping loading
	I1227 09:37:54.829002  647990 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0 crio true true} ...
	I1227 09:37:54.829112  647990 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=kindnet-157923 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:kindnet-157923 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet}
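A few lines below, that unit and its 10-kubeadm.conf drop-in get scp'd into place; systemd only honors the ExecStart override after a daemon-reload, which the log performs shortly. The merged result is easiest to inspect with systemctl cat (illustrative):

	sudo systemctl daemon-reload
	systemctl cat kubelet    # shows kubelet.service plus the 10-kubeadm.conf drop-in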
	I1227 09:37:54.829214  647990 ssh_runner.go:195] Run: crio config
	I1227 09:37:54.887057  647990 cni.go:84] Creating CNI manager for "kindnet"
	I1227 09:37:54.887091  647990 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1227 09:37:54.887123  647990 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kindnet-157923 NodeName:kindnet-157923 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1227 09:37:54.887285  647990 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kindnet-157923"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1227 09:37:54.887345  647990 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1227 09:37:54.896116  647990 binaries.go:51] Found k8s binaries, skipping transfer
	I1227 09:37:54.896181  647990 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1227 09:37:54.905443  647990 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (364 bytes)
	I1227 09:37:54.920554  647990 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1227 09:37:54.936273  647990 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
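With the rendered config staged at kubeadm.yaml.new, it can be linted offline before init ever runs; recent kubeadm ships a validator subcommand (assuming kubeadm v1.26+ at the path below):

	sudo /var/lib/minikube/binaries/v1.35.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new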
	I1227 09:37:54.950650  647990 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1227 09:37:54.954244  647990 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 09:37:54.964663  647990 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 09:37:55.065441  647990 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 09:37:55.087499  647990 certs.go:69] Setting up /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/kindnet-157923 for IP: 192.168.76.2
	I1227 09:37:55.087519  647990 certs.go:195] generating shared ca certs ...
	I1227 09:37:55.087537  647990 certs.go:227] acquiring lock for ca certs: {Name:mkab52f7e7387ca5a3a8b54d27e85c3698ea95d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:37:55.087707  647990 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22343-373581/.minikube/ca.key
	I1227 09:37:55.087781  647990 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22343-373581/.minikube/proxy-client-ca.key
	I1227 09:37:55.087806  647990 certs.go:257] generating profile certs ...
	I1227 09:37:55.087889  647990 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/kindnet-157923/client.key
	I1227 09:37:55.087916  647990 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/kindnet-157923/client.crt with IP's: []
	I1227 09:37:55.206241  647990 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/kindnet-157923/client.crt ...
	I1227 09:37:55.206263  647990 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/kindnet-157923/client.crt: {Name:mk4b27371e040b28c42b03f162405c1098913b47 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:37:55.206398  647990 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/kindnet-157923/client.key ...
	I1227 09:37:55.206409  647990 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/kindnet-157923/client.key: {Name:mkf82ad75ce77023e0b0b356b51bb304b0e5c28f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:37:55.206494  647990 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/kindnet-157923/apiserver.key.569afa2f
	I1227 09:37:55.206507  647990 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/kindnet-157923/apiserver.crt.569afa2f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1227 09:37:55.370250  647990 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/kindnet-157923/apiserver.crt.569afa2f ...
	I1227 09:37:55.370276  647990 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/kindnet-157923/apiserver.crt.569afa2f: {Name:mkab6a3a74ba9d3018d466bde2b041326b2d56ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:37:55.370456  647990 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/kindnet-157923/apiserver.key.569afa2f ...
	I1227 09:37:55.370480  647990 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/kindnet-157923/apiserver.key.569afa2f: {Name:mk3d9f655a6c22cab3f504a60443de00f5b6d230 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:37:55.370596  647990 certs.go:382] copying /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/kindnet-157923/apiserver.crt.569afa2f -> /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/kindnet-157923/apiserver.crt
	I1227 09:37:55.370716  647990 certs.go:386] copying /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/kindnet-157923/apiserver.key.569afa2f -> /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/kindnet-157923/apiserver.key
	I1227 09:37:55.370834  647990 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/kindnet-157923/proxy-client.key
	I1227 09:37:55.370857  647990 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/kindnet-157923/proxy-client.crt with IP's: []
	I1227 09:37:55.404908  647990 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/kindnet-157923/proxy-client.crt ...
	I1227 09:37:55.404936  647990 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/kindnet-157923/proxy-client.crt: {Name:mk1c886e2c1537ccc3cb9b9a86fad437a4209670 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:37:55.405118  647990 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/kindnet-157923/proxy-client.key ...
	I1227 09:37:55.405145  647990 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/kindnet-157923/proxy-client.key: {Name:mk583afa8babce870fb3b7c9a396740ac2bba309 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:37:55.405410  647990 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/377171.pem (1338 bytes)
	W1227 09:37:55.405465  647990 certs.go:480] ignoring /home/jenkins/minikube-integration/22343-373581/.minikube/certs/377171_empty.pem, impossibly tiny 0 bytes
	I1227 09:37:55.405481  647990 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca-key.pem (1679 bytes)
	I1227 09:37:55.405516  647990 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/ca.pem (1082 bytes)
	I1227 09:37:55.405552  647990 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/cert.pem (1123 bytes)
	I1227 09:37:55.405592  647990 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-373581/.minikube/certs/key.pem (1679 bytes)
	I1227 09:37:55.405660  647990 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-373581/.minikube/files/etc/ssl/certs/3771712.pem (1708 bytes)
	I1227 09:37:55.406283  647990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1227 09:37:55.427179  647990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1227 09:37:55.447038  647990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1227 09:37:55.466089  647990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1227 09:37:55.486256  647990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/kindnet-157923/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1227 09:37:55.505373  647990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/kindnet-157923/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1227 09:37:55.524526  647990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/kindnet-157923/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1227 09:37:55.544635  647990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/kindnet-157923/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1227 09:37:55.565595  647990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1227 09:37:55.588733  647990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/certs/377171.pem --> /usr/share/ca-certificates/377171.pem (1338 bytes)
	I1227 09:37:55.608351  647990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-373581/.minikube/files/etc/ssl/certs/3771712.pem --> /usr/share/ca-certificates/3771712.pem (1708 bytes)
	I1227 09:37:55.628017  647990 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1227 09:37:55.642193  647990 ssh_runner.go:195] Run: openssl version
	I1227 09:37:55.649648  647990 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:37:55.659190  647990 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1227 09:37:55.668559  647990 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:37:55.672721  647990 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 09:06 /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:37:55.672775  647990 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:37:55.713639  647990 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1227 09:37:55.722406  647990 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1227 09:37:55.730264  647990 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/377171.pem
	I1227 09:37:55.738297  647990 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/377171.pem /etc/ssl/certs/377171.pem
	I1227 09:37:55.746031  647990 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/377171.pem
	I1227 09:37:55.749594  647990 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 27 09:09 /usr/share/ca-certificates/377171.pem
	I1227 09:37:55.749644  647990 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/377171.pem
	I1227 09:37:55.789889  647990 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1227 09:37:55.798174  647990 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/377171.pem /etc/ssl/certs/51391683.0
	I1227 09:37:55.807278  647990 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3771712.pem
	I1227 09:37:55.816048  647990 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3771712.pem /etc/ssl/certs/3771712.pem
	I1227 09:37:55.824201  647990 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3771712.pem
	I1227 09:37:55.832745  647990 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 27 09:09 /usr/share/ca-certificates/3771712.pem
	I1227 09:37:55.832838  647990 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3771712.pem
	I1227 09:37:55.878457  647990 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1227 09:37:55.887849  647990 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/3771712.pem /etc/ssl/certs/3ec20f2e.0
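Everything from the openssl version check down to here follows OpenSSL's c_rehash convention: each trusted CA in /etc/ssl/certs must be reachable through a symlink named <subject-hash>.0, and the x509 -hash calls compute that hash. Reproduced by hand (illustrative):

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)    # e.g. b5213941
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"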
	I1227 09:37:55.897027  647990 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1227 09:37:55.901326  647990 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1227 09:37:55.901404  647990 kubeadm.go:401] StartCluster: {Name:kindnet-157923 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:kindnet-157923 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 09:37:55.901490  647990 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 09:37:55.901543  647990 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 09:37:55.935869  647990 cri.go:96] found id: ""
	I1227 09:37:55.935939  647990 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1227 09:37:55.944874  647990 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1227 09:37:55.953265  647990 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1227 09:37:55.953400  647990 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1227 09:37:55.961985  647990 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1227 09:37:55.962004  647990 kubeadm.go:158] found existing configuration files:
	
	I1227 09:37:55.962056  647990 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1227 09:37:55.971658  647990 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1227 09:37:55.971720  647990 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1227 09:37:55.981187  647990 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1227 09:37:55.990532  647990 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1227 09:37:55.990606  647990 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1227 09:37:56.000915  647990 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1227 09:37:56.010527  647990 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1227 09:37:56.010583  647990 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1227 09:37:56.021424  647990 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1227 09:37:56.030746  647990 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1227 09:37:56.030839  647990 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1227 09:37:56.038200  647990 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1227 09:37:56.081534  647990 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I1227 09:37:56.081592  647990 kubeadm.go:319] [preflight] Running pre-flight checks
	I1227 09:37:56.161036  647990 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1227 09:37:56.161132  647990 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1227 09:37:56.161178  647990 kubeadm.go:319] OS: Linux
	I1227 09:37:56.161237  647990 kubeadm.go:319] CGROUPS_CPU: enabled
	I1227 09:37:56.161297  647990 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1227 09:37:56.161362  647990 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1227 09:37:56.161422  647990 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1227 09:37:56.161483  647990 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1227 09:37:56.161689  647990 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1227 09:37:56.161768  647990 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1227 09:37:56.161883  647990 kubeadm.go:319] CGROUPS_IO: enabled
	I1227 09:37:56.224744  647990 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1227 09:37:56.224938  647990 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1227 09:37:56.225063  647990 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1227 09:37:56.232440  647990 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1227 09:37:56.234255  647990 out.go:252]   - Generating certificates and keys ...
	I1227 09:37:56.234368  647990 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1227 09:37:56.234477  647990 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1227 09:37:56.373544  647990 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1227 09:37:56.427737  647990 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1227 09:37:56.526626  647990 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1227 09:37:56.591448  647990 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1227 09:37:56.778408  647990 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1227 09:37:56.778602  647990 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [kindnet-157923 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1227 09:37:56.805353  647990 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1227 09:37:56.805463  647990 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [kindnet-157923 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1227 09:37:56.861596  647990 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	
	
	==> CRI-O <==
	Dec 27 09:37:45 default-k8s-diff-port-497722 crio[568]: time="2025-12-27T09:37:45.812273517Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 09:37:45 default-k8s-diff-port-497722 crio[568]: time="2025-12-27T09:37:45.812954179Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 09:37:45 default-k8s-diff-port-497722 crio[568]: time="2025-12-27T09:37:45.848248594Z" level=info msg="Created container a14b289c8f8518d9f61f525c74ad8885cda783871570ab9f26261ca1353ef964: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-mvrjk/dashboard-metrics-scraper" id=924a81c9-499e-4c8c-b584-a1aad10b8a56 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 09:37:45 default-k8s-diff-port-497722 crio[568]: time="2025-12-27T09:37:45.848908135Z" level=info msg="Starting container: a14b289c8f8518d9f61f525c74ad8885cda783871570ab9f26261ca1353ef964" id=dcddc79c-3d47-4f2c-8945-ac52a88a4975 name=/runtime.v1.RuntimeService/StartContainer
	Dec 27 09:37:45 default-k8s-diff-port-497722 crio[568]: time="2025-12-27T09:37:45.851001862Z" level=info msg="Started container" PID=1755 containerID=a14b289c8f8518d9f61f525c74ad8885cda783871570ab9f26261ca1353ef964 description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-mvrjk/dashboard-metrics-scraper id=dcddc79c-3d47-4f2c-8945-ac52a88a4975 name=/runtime.v1.RuntimeService/StartContainer sandboxID=97eab3c2bf8d727c3dc446ab290e6f73cee8ff36a2eae9d478616b9c1c9c756f
	Dec 27 09:37:45 default-k8s-diff-port-497722 crio[568]: time="2025-12-27T09:37:45.975269999Z" level=info msg="Removing container: 06c1182efc2f4a83f2a6b8a54c99b83972f25d87656601803c40085af5641187" id=f3d84bf0-be0c-4b22-adf1-b46286b59e83 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 27 09:37:45 default-k8s-diff-port-497722 crio[568]: time="2025-12-27T09:37:45.987418074Z" level=info msg="Removed container 06c1182efc2f4a83f2a6b8a54c99b83972f25d87656601803c40085af5641187: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-mvrjk/dashboard-metrics-scraper" id=f3d84bf0-be0c-4b22-adf1-b46286b59e83 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 27 09:37:46 default-k8s-diff-port-497722 crio[568]: time="2025-12-27T09:37:46.579912375Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 27 09:37:46 default-k8s-diff-port-497722 crio[568]: time="2025-12-27T09:37:46.584387966Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 27 09:37:46 default-k8s-diff-port-497722 crio[568]: time="2025-12-27T09:37:46.584412546Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 27 09:37:46 default-k8s-diff-port-497722 crio[568]: time="2025-12-27T09:37:46.584430056Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 27 09:37:46 default-k8s-diff-port-497722 crio[568]: time="2025-12-27T09:37:46.588123573Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 27 09:37:46 default-k8s-diff-port-497722 crio[568]: time="2025-12-27T09:37:46.58814912Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 27 09:37:46 default-k8s-diff-port-497722 crio[568]: time="2025-12-27T09:37:46.588166274Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 27 09:37:46 default-k8s-diff-port-497722 crio[568]: time="2025-12-27T09:37:46.591585682Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 27 09:37:46 default-k8s-diff-port-497722 crio[568]: time="2025-12-27T09:37:46.591607136Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 27 09:37:46 default-k8s-diff-port-497722 crio[568]: time="2025-12-27T09:37:46.591622981Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 27 09:37:46 default-k8s-diff-port-497722 crio[568]: time="2025-12-27T09:37:46.594994394Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 27 09:37:46 default-k8s-diff-port-497722 crio[568]: time="2025-12-27T09:37:46.595017666Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 27 09:37:46 default-k8s-diff-port-497722 crio[568]: time="2025-12-27T09:37:46.595034421Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 27 09:37:46 default-k8s-diff-port-497722 crio[568]: time="2025-12-27T09:37:46.598327483Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 27 09:37:46 default-k8s-diff-port-497722 crio[568]: time="2025-12-27T09:37:46.598347228Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 27 09:37:46 default-k8s-diff-port-497722 crio[568]: time="2025-12-27T09:37:46.598363449Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 27 09:37:46 default-k8s-diff-port-497722 crio[568]: time="2025-12-27T09:37:46.602099154Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 27 09:37:46 default-k8s-diff-port-497722 crio[568]: time="2025-12-27T09:37:46.602127929Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	a14b289c8f851       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           11 seconds ago      Exited              dashboard-metrics-scraper   3                   97eab3c2bf8d7       dashboard-metrics-scraper-867fb5f87b-mvrjk             kubernetes-dashboard
	cbe3be0499b1e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           20 seconds ago      Running             storage-provisioner         1                   e5b36dc3eb8e7       storage-provisioner                                    kube-system
	54e4f5a34233f       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   41 seconds ago      Running             kubernetes-dashboard        0                   bd97ba7583b34       kubernetes-dashboard-b84665fb8-f9pn7                   kubernetes-dashboard
	c8f5a6ae9dd6d       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           51 seconds ago      Running             busybox                     1                   46d9ae3244828       busybox                                                default
	772b7ab9392d0       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                           51 seconds ago      Running             coredns                     0                   6a93edef05a44       coredns-7d764666f9-wfv5r                               kube-system
	30fa25883572a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           51 seconds ago      Exited              storage-provisioner         0                   e5b36dc3eb8e7       storage-provisioner                                    kube-system
	def2adfb364c1       32652ff1bbe6b6df16a3dc621fcad4e6e185a2852bffc5847aab79e36c54bfb8                                           51 seconds ago      Running             kube-proxy                  0                   098c50160a75a       kube-proxy-6z4vt                                       kube-system
	37e5bbef1518a       4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251                                           51 seconds ago      Running             kindnet-cni                 0                   2b00bd8535b9e       kindnet-rd4dj                                          kube-system
	f7a05d26251e8       5c6acd67e9cd10eb60a246cd233db251ef62ea97e6572f897e873f0cb648f499                                           54 seconds ago      Running             kube-apiserver              0                   bcd0a5bbcca2d       kube-apiserver-default-k8s-diff-port-497722            kube-system
	05ea82c1bf330       550794e3b12ac21ec7fd940bdfb45f7c8dae3c52acd8c70e580b882de20c3dcc                                           54 seconds ago      Running             kube-scheduler              0                   a0f8671f048d1       kube-scheduler-default-k8s-diff-port-497722            kube-system
	0c8a5140613f4       0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2                                           54 seconds ago      Running             etcd                        0                   e7586553fd42d       etcd-default-k8s-diff-port-497722                      kube-system
	0f35c56ae0629       2c9a4b058bd7e6ec479c38f9e8a1dac2f5ee5b0a3ebda6dfac968e8720229508                                           54 seconds ago      Running             kube-controller-manager     0                   a7f1d02239e1c       kube-controller-manager-default-k8s-diff-port-497722   kube-system
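The table above comes from minikube's log collector; the same view can be pulled straight from CRI-O on the node (illustrative):

	sudo crictl ps -a    # all containers, including exited ones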
	
	
	==> coredns [772b7ab9392d0f203c9344bdcc4efd5006401e9b969058872cfb1d1c6c1826b6] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:49281 - 779 "HINFO IN 2340639938046049120.2113270316392716189. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.016536351s
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-497722
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-497722
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6d12708197eb5984f26340287ced0d25367967e8
	                    minikube.k8s.io/name=default-k8s-diff-port-497722
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_27T09_36_05_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 27 Dec 2025 09:36:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-497722
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 27 Dec 2025 09:37:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 27 Dec 2025 09:37:35 +0000   Sat, 27 Dec 2025 09:36:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 27 Dec 2025 09:37:35 +0000   Sat, 27 Dec 2025 09:36:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 27 Dec 2025 09:37:35 +0000   Sat, 27 Dec 2025 09:36:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 27 Dec 2025 09:37:35 +0000   Sat, 27 Dec 2025 09:36:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    default-k8s-diff-port-497722
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 6a05ebbd9dbde47388fc365d694bbbfd
	  System UUID:                e2fdc3b1-3a68-4551-be95-6955cffc1d64
	  Boot ID:                    38891013-55ac-4a66-aef6-0b0711ecc60c
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 coredns-7d764666f9-wfv5r                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     108s
	  kube-system                 etcd-default-k8s-diff-port-497722                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         115s
	  kube-system                 kindnet-rd4dj                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      108s
	  kube-system                 kube-apiserver-default-k8s-diff-port-497722             250m (3%)     0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-497722    200m (2%)     0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-proxy-6z4vt                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 kube-scheduler-default-k8s-diff-port-497722             100m (1%)     0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kubernetes-dashboard        dashboard-metrics-scraper-867fb5f87b-mvrjk              0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-f9pn7                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  109s  node-controller  Node default-k8s-diff-port-497722 event: Registered Node default-k8s-diff-port-497722 in Controller
	  Normal  RegisteredNode  49s   node-controller  Node default-k8s-diff-port-497722 event: Registered Node default-k8s-diff-port-497722 in Controller
	
	
	==> dmesg <==
	[Dec27 09:02] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 3a ce 76 ea 00 16 08 06
	[ +15.308804] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 5e ed 76 f4 5a 59 08 06
	[  +0.000321] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 3a ce 76 ea 00 16 08 06
	[  +1.257710] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 62 30 8d 6e b6 d3 08 06
	[Dec27 09:03] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff de f8 3b c4 47 35 08 06
	[  +0.002052] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff de 0f d7 5a 4b 84 08 06
	[ +15.582399] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff f6 ce 0c 6c 86 d2 08 06
	[  +0.000307] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 62 30 8d 6e b6 d3 08 06
	[  +9.916919] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 5e cc 0c a0 d8 19 08 06
	[  +0.000360] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 06 01 72 f8 79 43 08 06
	[ +21.695394] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 66 bb be 81 50 ce 08 06
	[  +0.000381] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff de 0f d7 5a 4b 84 08 06
	
	
	==> etcd [0c8a5140613f466c8107ca22c4400874507b9d96db9fc14bb7f9ecf967957942] <==
	{"level":"info","ts":"2025-12-27T09:37:03.370368Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-27T09:37:03.370400Z","caller":"embed/etcd.go:292","msg":"now serving peer/client/metrics","local-member-id":"f23060b075c4c089","initial-advertise-peer-urls":["https://192.168.103.2:2380"],"listen-peer-urls":["https://192.168.103.2:2380"],"advertise-client-urls":["https://192.168.103.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.103.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-12-27T09:37:03.370448Z","caller":"embed/etcd.go:890","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-12-27T09:37:03.370716Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"f23060b075c4c089 switched to configuration voters=(17451554867067011209)"}
	{"level":"info","ts":"2025-12-27T09:37:03.370780Z","caller":"membership/cluster.go:433","msg":"ignore already added member","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","added-peer-id":"f23060b075c4c089","added-peer-peer-urls":["https://192.168.103.2:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2025-12-27T09:37:03.371034Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","from":"3.6","to":"3.6"}
	{"level":"info","ts":"2025-12-27T09:37:03.661819Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"f23060b075c4c089 is starting a new election at term 2"}
	{"level":"info","ts":"2025-12-27T09:37:03.661873Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"f23060b075c4c089 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-12-27T09:37:03.661933Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"f23060b075c4c089 received MsgPreVoteResp from f23060b075c4c089 at term 2"}
	{"level":"info","ts":"2025-12-27T09:37:03.661944Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"f23060b075c4c089 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-27T09:37:03.661963Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"f23060b075c4c089 became candidate at term 3"}
	{"level":"info","ts":"2025-12-27T09:37:03.667909Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"f23060b075c4c089 received MsgVoteResp from f23060b075c4c089 at term 3"}
	{"level":"info","ts":"2025-12-27T09:37:03.667959Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"f23060b075c4c089 has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-27T09:37:03.667983Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"f23060b075c4c089 became leader at term 3"}
	{"level":"info","ts":"2025-12-27T09:37:03.667994Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: f23060b075c4c089 elected leader f23060b075c4c089 at term 3"}
	{"level":"info","ts":"2025-12-27T09:37:03.672554Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"f23060b075c4c089","local-member-attributes":"{Name:default-k8s-diff-port-497722 ClientURLs:[https://192.168.103.2:2379]}","cluster-id":"3336683c081d149d","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-27T09:37:03.672595Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T09:37:03.672614Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T09:37:03.673997Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-27T09:37:03.698684Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-27T09:37:03.698783Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-27T09:37:03.718286Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-27T09:37:03.728486Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-27T09:37:03.729641Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.103.2:2379"}
	{"level":"info","ts":"2025-12-27T09:37:25.576914Z","caller":"traceutil/trace.go:172","msg":"trace[700906283] transaction","detail":"{read_only:false; response_revision:617; number_of_response:1; }","duration":"124.959216ms","start":"2025-12-27T09:37:25.451928Z","end":"2025-12-27T09:37:25.576887Z","steps":["trace[700906283] 'process raft request'  (duration: 124.768555ms)"],"step_count":1}
	
	
	==> kernel <==
	 09:37:58 up  1:20,  0 user,  load average: 2.98, 3.06, 2.36
	Linux default-k8s-diff-port-497722 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [37e5bbef1518ab34328ea74c051dd01a3e28b525871fc9d8ccc080b6095d603d] <==
	I1227 09:37:06.376051       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1227 09:37:06.376408       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1227 09:37:06.376626       1 main.go:148] setting mtu 1500 for CNI 
	I1227 09:37:06.376661       1 main.go:178] kindnetd IP family: "ipv4"
	I1227 09:37:06.376692       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-27T09:37:06Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1227 09:37:06.579322       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1227 09:37:06.579407       1 controller.go:381] "Waiting for informer caches to sync"
	I1227 09:37:06.579419       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1227 09:37:06.580967       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1227 09:37:36.579700       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1227 09:37:36.580875       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1227 09:37:36.580925       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1227 09:37:36.580925       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1227 09:37:37.680777       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1227 09:37:37.680884       1 metrics.go:72] Registering metrics
	I1227 09:37:37.680967       1 controller.go:711] "Syncing nftables rules"
	I1227 09:37:46.579558       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1227 09:37:46.579617       1 main.go:301] handling current node
	I1227 09:37:56.582594       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1227 09:37:56.582666       1 main.go:301] handling current node
	
	
	==> kube-apiserver [f7a05d26251e8a9f4091116e036a79a8654a182636649e96285fc252c0530199] <==
	I1227 09:37:05.110973       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1227 09:37:05.111145       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1227 09:37:05.111161       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1227 09:37:05.111310       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1227 09:37:05.111363       1 aggregator.go:187] initial CRD sync complete...
	I1227 09:37:05.111380       1 autoregister_controller.go:144] Starting autoregister controller
	I1227 09:37:05.111386       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1227 09:37:05.111395       1 cache.go:39] Caches are synced for autoregister controller
	I1227 09:37:05.111180       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1227 09:37:05.113879       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1227 09:37:05.117360       1 shared_informer.go:377] "Caches are synced"
	I1227 09:37:05.117469       1 policy_source.go:248] refreshing policies
	E1227 09:37:05.118692       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1227 09:37:05.150460       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1227 09:37:05.530311       1 controller.go:667] quota admission added evaluator for: namespaces
	I1227 09:37:05.566198       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1227 09:37:05.588262       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1227 09:37:05.596429       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1227 09:37:05.604314       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1227 09:37:05.651293       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.104.208.206"}
	I1227 09:37:05.672931       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.101.149.77"}
	I1227 09:37:06.007301       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1227 09:37:08.765422       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1227 09:37:08.861972       1 controller.go:667] quota admission added evaluator for: endpoints
	I1227 09:37:08.961738       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [0f35c56ae0629feaaf5192c69a9a652f101e67591c4c93f500daa6ceb2a62911] <==
	I1227 09:37:08.267758       1 shared_informer.go:377] "Caches are synced"
	I1227 09:37:08.267868       1 shared_informer.go:377] "Caches are synced"
	I1227 09:37:08.267964       1 shared_informer.go:377] "Caches are synced"
	I1227 09:37:08.268089       1 shared_informer.go:377] "Caches are synced"
	I1227 09:37:08.268132       1 shared_informer.go:377] "Caches are synced"
	I1227 09:37:08.268284       1 shared_informer.go:377] "Caches are synced"
	I1227 09:37:08.268363       1 shared_informer.go:377] "Caches are synced"
	I1227 09:37:08.268387       1 shared_informer.go:377] "Caches are synced"
	I1227 09:37:08.268407       1 shared_informer.go:377] "Caches are synced"
	I1227 09:37:08.268422       1 shared_informer.go:377] "Caches are synced"
	I1227 09:37:08.268841       1 shared_informer.go:377] "Caches are synced"
	I1227 09:37:08.268879       1 shared_informer.go:377] "Caches are synced"
	I1227 09:37:08.268955       1 shared_informer.go:377] "Caches are synced"
	I1227 09:37:08.268962       1 shared_informer.go:377] "Caches are synced"
	I1227 09:37:08.269029       1 shared_informer.go:377] "Caches are synced"
	I1227 09:37:08.269226       1 shared_informer.go:377] "Caches are synced"
	I1227 09:37:08.269344       1 shared_informer.go:377] "Caches are synced"
	I1227 09:37:08.269364       1 shared_informer.go:377] "Caches are synced"
	I1227 09:37:08.274289       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 09:37:08.275625       1 shared_informer.go:377] "Caches are synced"
	I1227 09:37:08.369436       1 shared_informer.go:377] "Caches are synced"
	I1227 09:37:08.369452       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1227 09:37:08.369456       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1227 09:37:08.374683       1 shared_informer.go:377] "Caches are synced"
	I1227 09:37:08.965584       1 endpointslice_controller.go:361] "Error syncing endpoint slices for service, retrying" key="kubernetes-dashboard/kubernetes-dashboard" err="EndpointSlice informer cache is out of date"
	
	
	==> kube-proxy [def2adfb364c1d7110ce2224f01de177f744737fe0b1a459d15cf21d75aa4b3c] <==
	I1227 09:37:06.245213       1 server_linux.go:53] "Using iptables proxy"
	I1227 09:37:06.324370       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 09:37:06.425015       1 shared_informer.go:377] "Caches are synced"
	I1227 09:37:06.425053       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1227 09:37:06.425149       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1227 09:37:06.449105       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1227 09:37:06.449187       1 server_linux.go:136] "Using iptables Proxier"
	I1227 09:37:06.455709       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1227 09:37:06.456219       1 server.go:529] "Version info" version="v1.35.0"
	I1227 09:37:06.456310       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 09:37:06.458577       1 config.go:403] "Starting serviceCIDR config controller"
	I1227 09:37:06.458950       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1227 09:37:06.458644       1 config.go:309] "Starting node config controller"
	I1227 09:37:06.459128       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1227 09:37:06.459342       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1227 09:37:06.458885       1 config.go:106] "Starting endpoint slice config controller"
	I1227 09:37:06.459461       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1227 09:37:06.458717       1 config.go:200] "Starting service config controller"
	I1227 09:37:06.459752       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1227 09:37:06.559274       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1227 09:37:06.560855       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1227 09:37:06.560896       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [05ea82c1bf330e30a293e6d7ea1c01b1766ebce50eeaf8868bbcf622fd71d8e8] <==
	I1227 09:37:03.798933       1 serving.go:386] Generated self-signed cert in-memory
	W1227 09:37:05.053294       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1227 09:37:05.053326       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1227 09:37:05.053339       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1227 09:37:05.053349       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1227 09:37:05.115752       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0"
	I1227 09:37:05.115787       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 09:37:05.118677       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1227 09:37:05.118726       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 09:37:05.118875       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1227 09:37:05.119111       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1227 09:37:05.219889       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 27 09:37:22 default-k8s-diff-port-497722 kubelet[732]: E1227 09:37:22.705759     732 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-mvrjk_kubernetes-dashboard(13912b50-df1b-456e-9c7d-9c12b2a4c3bb)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-mvrjk" podUID="13912b50-df1b-456e-9c7d-9c12b2a4c3bb"
	Dec 27 09:37:23 default-k8s-diff-port-497722 kubelet[732]: E1227 09:37:23.802136     732 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-mvrjk" containerName="dashboard-metrics-scraper"
	Dec 27 09:37:23 default-k8s-diff-port-497722 kubelet[732]: I1227 09:37:23.802185     732 scope.go:122] "RemoveContainer" containerID="37b4a11f3cffff06b0ca4265921cfdc34c44d2dd49c9e5a292fc629279902447"
	Dec 27 09:37:23 default-k8s-diff-port-497722 kubelet[732]: I1227 09:37:23.912175     732 scope.go:122] "RemoveContainer" containerID="37b4a11f3cffff06b0ca4265921cfdc34c44d2dd49c9e5a292fc629279902447"
	Dec 27 09:37:23 default-k8s-diff-port-497722 kubelet[732]: E1227 09:37:23.912395     732 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-mvrjk" containerName="dashboard-metrics-scraper"
	Dec 27 09:37:23 default-k8s-diff-port-497722 kubelet[732]: I1227 09:37:23.912429     732 scope.go:122] "RemoveContainer" containerID="06c1182efc2f4a83f2a6b8a54c99b83972f25d87656601803c40085af5641187"
	Dec 27 09:37:23 default-k8s-diff-port-497722 kubelet[732]: E1227 09:37:23.912632     732 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-mvrjk_kubernetes-dashboard(13912b50-df1b-456e-9c7d-9c12b2a4c3bb)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-mvrjk" podUID="13912b50-df1b-456e-9c7d-9c12b2a4c3bb"
	Dec 27 09:37:32 default-k8s-diff-port-497722 kubelet[732]: E1227 09:37:32.705518     732 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-mvrjk" containerName="dashboard-metrics-scraper"
	Dec 27 09:37:32 default-k8s-diff-port-497722 kubelet[732]: I1227 09:37:32.705568     732 scope.go:122] "RemoveContainer" containerID="06c1182efc2f4a83f2a6b8a54c99b83972f25d87656601803c40085af5641187"
	Dec 27 09:37:32 default-k8s-diff-port-497722 kubelet[732]: E1227 09:37:32.705784     732 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-mvrjk_kubernetes-dashboard(13912b50-df1b-456e-9c7d-9c12b2a4c3bb)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-mvrjk" podUID="13912b50-df1b-456e-9c7d-9c12b2a4c3bb"
	Dec 27 09:37:36 default-k8s-diff-port-497722 kubelet[732]: I1227 09:37:36.947821     732 scope.go:122] "RemoveContainer" containerID="30fa25883572a506bd41a669b17278222880dcd3554abbe51f237350512fd65e"
	Dec 27 09:37:39 default-k8s-diff-port-497722 kubelet[732]: E1227 09:37:39.706000     732 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-wfv5r" containerName="coredns"
	Dec 27 09:37:45 default-k8s-diff-port-497722 kubelet[732]: E1227 09:37:45.802204     732 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-mvrjk" containerName="dashboard-metrics-scraper"
	Dec 27 09:37:45 default-k8s-diff-port-497722 kubelet[732]: I1227 09:37:45.802254     732 scope.go:122] "RemoveContainer" containerID="06c1182efc2f4a83f2a6b8a54c99b83972f25d87656601803c40085af5641187"
	Dec 27 09:37:45 default-k8s-diff-port-497722 kubelet[732]: I1227 09:37:45.973975     732 scope.go:122] "RemoveContainer" containerID="06c1182efc2f4a83f2a6b8a54c99b83972f25d87656601803c40085af5641187"
	Dec 27 09:37:45 default-k8s-diff-port-497722 kubelet[732]: E1227 09:37:45.974186     732 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-mvrjk" containerName="dashboard-metrics-scraper"
	Dec 27 09:37:45 default-k8s-diff-port-497722 kubelet[732]: I1227 09:37:45.974222     732 scope.go:122] "RemoveContainer" containerID="a14b289c8f8518d9f61f525c74ad8885cda783871570ab9f26261ca1353ef964"
	Dec 27 09:37:45 default-k8s-diff-port-497722 kubelet[732]: E1227 09:37:45.974404     732 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-mvrjk_kubernetes-dashboard(13912b50-df1b-456e-9c7d-9c12b2a4c3bb)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-mvrjk" podUID="13912b50-df1b-456e-9c7d-9c12b2a4c3bb"
	Dec 27 09:37:52 default-k8s-diff-port-497722 kubelet[732]: E1227 09:37:52.704887     732 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-mvrjk" containerName="dashboard-metrics-scraper"
	Dec 27 09:37:52 default-k8s-diff-port-497722 kubelet[732]: I1227 09:37:52.704940     732 scope.go:122] "RemoveContainer" containerID="a14b289c8f8518d9f61f525c74ad8885cda783871570ab9f26261ca1353ef964"
	Dec 27 09:37:52 default-k8s-diff-port-497722 kubelet[732]: E1227 09:37:52.705164     732 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-mvrjk_kubernetes-dashboard(13912b50-df1b-456e-9c7d-9c12b2a4c3bb)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-mvrjk" podUID="13912b50-df1b-456e-9c7d-9c12b2a4c3bb"
	Dec 27 09:37:53 default-k8s-diff-port-497722 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 27 09:37:53 default-k8s-diff-port-497722 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 27 09:37:53 default-k8s-diff-port-497722 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 27 09:37:53 default-k8s-diff-port-497722 systemd[1]: kubelet.service: Consumed 1.742s CPU time.
	
	
	==> kubernetes-dashboard [54e4f5a34233f7629865a95e361beb06cbd770029b4c9e51b5593b268204cffc] <==
	2025/12/27 09:37:16 Starting overwatch
	2025/12/27 09:37:16 Using namespace: kubernetes-dashboard
	2025/12/27 09:37:16 Using in-cluster config to connect to apiserver
	2025/12/27 09:37:16 Using secret token for csrf signing
	2025/12/27 09:37:16 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/27 09:37:16 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/27 09:37:16 Successful initial request to the apiserver, version: v1.35.0
	2025/12/27 09:37:16 Generating JWE encryption key
	2025/12/27 09:37:16 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/27 09:37:16 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/27 09:37:17 Initializing JWE encryption key from synchronized object
	2025/12/27 09:37:17 Creating in-cluster Sidecar client
	2025/12/27 09:37:17 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/27 09:37:17 Serving insecurely on HTTP port: 9090
	2025/12/27 09:37:47 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [30fa25883572a506bd41a669b17278222880dcd3554abbe51f237350512fd65e] <==
	I1227 09:37:06.213062       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1227 09:37:36.215302       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [cbe3be0499b1e292ff93ccb690ac42127d4b7da2ccfb910a73b5aa34c1bbfa2c] <==
	I1227 09:37:37.021267       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1227 09:37:37.030288       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1227 09:37:37.030337       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1227 09:37:37.032505       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 09:37:40.488374       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 09:37:44.748668       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 09:37:48.347654       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 09:37:51.433776       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 09:37:54.456705       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 09:37:54.461823       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1227 09:37:54.461993       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1227 09:37:54.462185       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-497722_bee666ce-bfab-4fe6-8bce-9cbf3cdf7e5a!
	I1227 09:37:54.462183       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"bb43f7f4-07db-4bad-82fa-044874eea265", APIVersion:"v1", ResourceVersion:"647", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-497722_bee666ce-bfab-4fe6-8bce-9cbf3cdf7e5a became leader
	W1227 09:37:54.466218       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 09:37:54.471618       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1227 09:37:54.563306       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-497722_bee666ce-bfab-4fe6-8bce-9cbf3cdf7e5a!
	W1227 09:37:56.475143       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 09:37:56.479900       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-497722 -n default-k8s-diff-port-497722
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-497722 -n default-k8s-diff-port-497722: exit status 2 (366.566604ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context default-k8s-diff-port-497722 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (5.74s)
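For reference, the post-mortem above reduces to three commands; a by-hand repro (a sketch reusing the profile name from this run, not part of the test harness) is:

    out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-497722 -n default-k8s-diff-port-497722
    kubectl --context default-k8s-diff-port-497722 get po -A --field-selector=status.phase!=Running
    out/minikube-linux-amd64 logs -p default-k8s-diff-port-497722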


Test pass (279/332)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.28.0/json-events 12.07
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.07
9 TestDownloadOnly/v1.28.0/DeleteAll 0.21
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.15
12 TestDownloadOnly/v1.35.0/json-events 9.27
13 TestDownloadOnly/v1.35.0/preload-exists 0
17 TestDownloadOnly/v1.35.0/LogsDuration 0.07
18 TestDownloadOnly/v1.35.0/DeleteAll 0.21
19 TestDownloadOnly/v1.35.0/DeleteAlwaysSucceeds 0.14
20 TestDownloadOnlyKic 0.4
21 TestBinaryMirror 0.79
22 TestOffline 51.52
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 100.38
31 TestAddons/serial/GCPAuth/Namespaces 0.11
32 TestAddons/serial/GCPAuth/FakeCredentials 9.41
48 TestAddons/StoppedEnableDisable 18.45
49 TestCertOptions 24.63
50 TestCertExpiration 209.33
52 TestForceSystemdFlag 27.43
53 TestForceSystemdEnv 22.29
58 TestErrorSpam/setup 15.43
59 TestErrorSpam/start 0.65
60 TestErrorSpam/status 0.96
61 TestErrorSpam/pause 5.39
62 TestErrorSpam/unpause 5.56
63 TestErrorSpam/stop 8.08
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 39.39
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 5.94
70 TestFunctional/serial/KubeContext 0.04
71 TestFunctional/serial/KubectlGetPods 0.06
74 TestFunctional/serial/CacheCmd/cache/add_remote 2.82
75 TestFunctional/serial/CacheCmd/cache/add_local 1.9
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
77 TestFunctional/serial/CacheCmd/cache/list 0.06
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.27
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.47
80 TestFunctional/serial/CacheCmd/cache/delete 0.12
81 TestFunctional/serial/MinikubeKubectlCmd 0.12
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
83 TestFunctional/serial/ExtraConfig 52.17
84 TestFunctional/serial/ComponentHealth 0.07
85 TestFunctional/serial/LogsCmd 1.12
86 TestFunctional/serial/LogsFileCmd 1.15
87 TestFunctional/serial/InvalidService 5.48
89 TestFunctional/parallel/ConfigCmd 0.45
90 TestFunctional/parallel/DashboardCmd 17.26
91 TestFunctional/parallel/DryRun 0.4
92 TestFunctional/parallel/InternationalLanguage 0.18
93 TestFunctional/parallel/StatusCmd 0.95
97 TestFunctional/parallel/ServiceCmdConnect 8.54
98 TestFunctional/parallel/AddonsCmd 0.15
99 TestFunctional/parallel/PersistentVolumeClaim 20.71
101 TestFunctional/parallel/SSHCmd 0.63
102 TestFunctional/parallel/CpCmd 1.74
103 TestFunctional/parallel/MySQL 20.92
104 TestFunctional/parallel/FileSync 0.29
105 TestFunctional/parallel/CertSync 1.72
109 TestFunctional/parallel/NodeLabels 0.07
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.59
113 TestFunctional/parallel/License 0.32
114 TestFunctional/parallel/ServiceCmd/DeployApp 7.18
116 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.44
117 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
119 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.21
120 TestFunctional/parallel/ServiceCmd/List 0.5
121 TestFunctional/parallel/ServiceCmd/JSONOutput 0.49
122 TestFunctional/parallel/ServiceCmd/HTTPS 0.34
123 TestFunctional/parallel/ServiceCmd/Format 0.34
124 TestFunctional/parallel/ServiceCmd/URL 0.34
125 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.07
126 TestFunctional/parallel/ProfileCmd/profile_not_create 0.5
127 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
131 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
132 TestFunctional/parallel/MountCmd/any-port 15.08
133 TestFunctional/parallel/ProfileCmd/profile_list 0.43
134 TestFunctional/parallel/ProfileCmd/profile_json_output 0.43
135 TestFunctional/parallel/Version/short 0.07
136 TestFunctional/parallel/Version/components 0.53
137 TestFunctional/parallel/UpdateContextCmd/no_changes 0.16
138 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.17
139 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.16
140 TestFunctional/parallel/ImageCommands/ImageListShort 0.23
141 TestFunctional/parallel/ImageCommands/ImageListTable 0.23
142 TestFunctional/parallel/ImageCommands/ImageListJson 0.24
143 TestFunctional/parallel/ImageCommands/ImageListYaml 0.23
144 TestFunctional/parallel/ImageCommands/ImageBuild 3.31
145 TestFunctional/parallel/ImageCommands/Setup 0.79
146 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.06
147 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.02
148 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.29
149 TestFunctional/parallel/MountCmd/specific-port 1.93
150 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.38
151 TestFunctional/parallel/ImageCommands/ImageRemove 0.57
152 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 2.04
153 TestFunctional/parallel/MountCmd/VerifyCleanup 2.01
154 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.6
155 TestFunctional/delete_echo-server_images 0.04
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.02
162 TestMultiControlPlane/serial/StartCluster 131.38
163 TestMultiControlPlane/serial/DeployApp 6.93
164 TestMultiControlPlane/serial/PingHostFromPods 1.01
165 TestMultiControlPlane/serial/AddWorkerNode 26.75
166 TestMultiControlPlane/serial/NodeLabels 0.06
167 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.88
168 TestMultiControlPlane/serial/CopyFile 16.34
169 TestMultiControlPlane/serial/StopSecondaryNode 14.24
170 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.7
171 TestMultiControlPlane/serial/RestartSecondaryNode 8.38
172 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.88
173 TestMultiControlPlane/serial/RestartClusterKeepsNodes 112.27
174 TestMultiControlPlane/serial/DeleteSecondaryNode 10.54
175 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.67
176 TestMultiControlPlane/serial/StopCluster 41.71
177 TestMultiControlPlane/serial/RestartCluster 49.82
178 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.68
179 TestMultiControlPlane/serial/AddSecondaryNode 72.1
180 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.88
185 TestJSONOutput/start/Command 39.64
186 TestJSONOutput/start/Audit 0
188 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/pause/Audit 0
194 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
198 TestJSONOutput/unpause/Audit 0
200 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/stop/Command 7.94
204 TestJSONOutput/stop/Audit 0
206 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
208 TestErrorJSONOutput 0.22
210 TestKicCustomNetwork/create_custom_network 28.33
211 TestKicCustomNetwork/use_default_bridge_network 18.33
212 TestKicExistingNetwork 19.56
213 TestKicCustomSubnet 22.72
214 TestKicStaticIP 23.23
215 TestMainNoArgs 0.06
216 TestMinikubeProfile 41.02
219 TestMountStart/serial/StartWithMountFirst 7.45
220 TestMountStart/serial/VerifyMountFirst 0.26
221 TestMountStart/serial/StartWithMountSecond 4.7
222 TestMountStart/serial/VerifyMountSecond 0.26
223 TestMountStart/serial/DeleteFirst 1.67
224 TestMountStart/serial/VerifyMountPostDelete 0.26
225 TestMountStart/serial/Stop 1.25
226 TestMountStart/serial/RestartStopped 7.74
227 TestMountStart/serial/VerifyMountPostStop 0.26
230 TestMultiNode/serial/FreshStart2Nodes 61.83
231 TestMultiNode/serial/DeployApp2Nodes 4.21
232 TestMultiNode/serial/PingHostFrom2Pods 0.69
233 TestMultiNode/serial/AddNode 27.69
234 TestMultiNode/serial/MultiNodeLabels 0.06
235 TestMultiNode/serial/ProfileList 0.63
236 TestMultiNode/serial/CopyFile 9.27
237 TestMultiNode/serial/StopNode 2.24
238 TestMultiNode/serial/StartAfterStop 6.95
239 TestMultiNode/serial/RestartKeepsNodes 59.56
240 TestMultiNode/serial/DeleteNode 4.96
241 TestMultiNode/serial/StopMultiNode 28.47
242 TestMultiNode/serial/RestartMultiNode 45.08
243 TestMultiNode/serial/ValidateNameConflict 19.02
250 TestScheduledStopUnix 94.61
253 TestInsufficientStorage 8.56
254 TestRunningBinaryUpgrade 49
256 TestKubernetesUpgrade 83.18
257 TestMissingContainerUpgrade 84.17
259 TestStoppedBinaryUpgrade/Setup 3.24
260 TestPause/serial/Start 50.24
269 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
270 TestNoKubernetes/serial/StartWithK8s 31.41
271 TestStoppedBinaryUpgrade/Upgrade 306.88
272 TestNoKubernetes/serial/StartWithStopK8s 17.74
273 TestNoKubernetes/serial/Start 4.73
274 TestPause/serial/SecondStartNoReconfiguration 6.54
278 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
279 TestNoKubernetes/serial/VerifyK8sNotRunning 0.3
280 TestNoKubernetes/serial/ProfileList 1.48
285 TestNetworkPlugins/group/false 3.99
286 TestNoKubernetes/serial/Stop 1.29
288 TestNoKubernetes/serial/StartNoArgs 7.21
292 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.31
293 TestPreload/Start-NoPreload-PullImage 57.21
295 TestStartStop/group/old-k8s-version/serial/FirstStart 51.06
296 TestPreload/Restart-With-Preload-Check-User-Image 44.28
298 TestStartStop/group/embed-certs/serial/FirstStart 37.53
299 TestStartStop/group/old-k8s-version/serial/DeployApp 10.27
301 TestStartStop/group/old-k8s-version/serial/Stop 16.04
304 TestStartStop/group/no-preload/serial/FirstStart 49.13
305 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.3
306 TestStartStop/group/old-k8s-version/serial/SecondStart 53.04
307 TestStoppedBinaryUpgrade/MinikubeLogs 1.25
309 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 39.23
310 TestStartStop/group/embed-certs/serial/DeployApp 9.43
312 TestStartStop/group/embed-certs/serial/Stop 17.05
313 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.19
314 TestStartStop/group/embed-certs/serial/SecondStart 44.51
315 TestStartStop/group/no-preload/serial/DeployApp 9.25
316 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.21
318 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
319 TestStartStop/group/no-preload/serial/Stop 18.12
321 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.07
322 TestStartStop/group/default-k8s-diff-port/serial/Stop 17.71
323 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.22
325 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.2
326 TestStartStop/group/no-preload/serial/SecondStart 51.34
328 TestStartStop/group/newest-cni/serial/FirstStart 23.14
329 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.23
330 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 45.66
331 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
332 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 6.09
333 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.25
335 TestStartStop/group/newest-cni/serial/DeployApp 0
337 TestStartStop/group/newest-cni/serial/Stop 10.06
338 TestNetworkPlugins/group/auto/Start 37.28
339 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.22
340 TestStartStop/group/newest-cni/serial/SecondStart 10.54
341 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
342 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
343 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.25
345 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
346 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
347 TestNetworkPlugins/group/kindnet/Start 38.62
348 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.08
349 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.08
350 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.27
352 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.24
354 TestNetworkPlugins/group/auto/KubeletFlags 0.29
355 TestNetworkPlugins/group/auto/NetCatPod 9.21
356 TestNetworkPlugins/group/calico/Start 47.8
357 TestNetworkPlugins/group/custom-flannel/Start 52.68
358 TestNetworkPlugins/group/auto/DNS 0.17
359 TestNetworkPlugins/group/auto/Localhost 0.1
360 TestNetworkPlugins/group/auto/HairPin 0.1
361 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
362 TestNetworkPlugins/group/enable-default-cni/Start 61.27
363 TestNetworkPlugins/group/kindnet/KubeletFlags 0.31
364 TestNetworkPlugins/group/kindnet/NetCatPod 10.19
365 TestNetworkPlugins/group/kindnet/DNS 0.12
366 TestNetworkPlugins/group/kindnet/Localhost 0.09
367 TestNetworkPlugins/group/kindnet/HairPin 0.09
368 TestNetworkPlugins/group/calico/ControllerPod 6.01
369 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.41
370 TestNetworkPlugins/group/custom-flannel/NetCatPod 8.21
371 TestNetworkPlugins/group/calico/KubeletFlags 0.33
372 TestNetworkPlugins/group/calico/NetCatPod 8.21
373 TestNetworkPlugins/group/flannel/Start 46.91
374 TestNetworkPlugins/group/custom-flannel/DNS 0.12
375 TestNetworkPlugins/group/custom-flannel/Localhost 0.12
376 TestNetworkPlugins/group/custom-flannel/HairPin 0.1
377 TestNetworkPlugins/group/calico/DNS 0.12
378 TestNetworkPlugins/group/calico/Localhost 0.1
379 TestNetworkPlugins/group/calico/HairPin 0.09
380 TestNetworkPlugins/group/bridge/Start 57.57
381 TestPreload/PreloadSrc/gcs 11.2
382 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.35
383 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.22
384 TestPreload/PreloadSrc/github 7.32
385 TestNetworkPlugins/group/enable-default-cni/DNS 0.12
386 TestNetworkPlugins/group/enable-default-cni/Localhost 0.1
387 TestNetworkPlugins/group/enable-default-cni/HairPin 0.09
388 TestPreload/PreloadSrc/gcs-cached 0.43
389 TestNetworkPlugins/group/flannel/ControllerPod 6.01
390 TestNetworkPlugins/group/flannel/KubeletFlags 0.29
391 TestNetworkPlugins/group/flannel/NetCatPod 9.18
392 TestNetworkPlugins/group/flannel/DNS 0.11
393 TestNetworkPlugins/group/flannel/Localhost 0.08
394 TestNetworkPlugins/group/flannel/HairPin 0.08
395 TestNetworkPlugins/group/bridge/KubeletFlags 0.28
396 TestNetworkPlugins/group/bridge/NetCatPod 9.16
397 TestNetworkPlugins/group/bridge/DNS 0.1
398 TestNetworkPlugins/group/bridge/Localhost 0.08
399 TestNetworkPlugins/group/bridge/HairPin 0.08

TestDownloadOnly/v1.28.0/json-events (12.07s)

=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-828881 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-828881 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (12.070006534s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (12.07s)
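The json-events subtest is just `minikube start` in download-only JSON mode; a standalone repro (a sketch; the throwaway profile name is hypothetical) is:

    out/minikube-linux-amd64 start -o=json --download-only -p download-only-demo --force \
      --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker
    out/minikube-linux-amd64 delete --all   # cleanup, as in the DeleteAll subtest below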

TestDownloadOnly/v1.28.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1227 09:05:30.443409  377171 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1227 09:05:30.443524  377171 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22343-373581/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
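The preload-exists check only verifies that the cached tarball is on disk; an equivalent manual check (a sketch, path copied from the log line above) is:

    ls -lh /home/jenkins/minikube-integration/22343-373581/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4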

TestDownloadOnly/v1.28.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-828881
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-828881: exit status 85 (70.022129ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-828881 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-828881 │ jenkins │ v1.37.0 │ 27 Dec 25 09:05 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/27 09:05:18
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1227 09:05:18.427039  377183 out.go:360] Setting OutFile to fd 1 ...
	I1227 09:05:18.427148  377183 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:05:18.427158  377183 out.go:374] Setting ErrFile to fd 2...
	I1227 09:05:18.427163  377183 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:05:18.427376  377183 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22343-373581/.minikube/bin
	W1227 09:05:18.427526  377183 root.go:314] Error reading config file at /home/jenkins/minikube-integration/22343-373581/.minikube/config/config.json: open /home/jenkins/minikube-integration/22343-373581/.minikube/config/config.json: no such file or directory
	I1227 09:05:18.428077  377183 out.go:368] Setting JSON to true
	I1227 09:05:18.429306  377183 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2862,"bootTime":1766823456,"procs":230,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1227 09:05:18.429367  377183 start.go:143] virtualization: kvm guest
	I1227 09:05:18.432664  377183 out.go:99] [download-only-828881] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	W1227 09:05:18.432847  377183 preload.go:372] Failed to list preload files: open /home/jenkins/minikube-integration/22343-373581/.minikube/cache/preloaded-tarball: no such file or directory
	I1227 09:05:18.432873  377183 notify.go:221] Checking for updates...
	I1227 09:05:18.433955  377183 out.go:171] MINIKUBE_LOCATION=22343
	I1227 09:05:18.435299  377183 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 09:05:18.436369  377183 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22343-373581/kubeconfig
	I1227 09:05:18.437305  377183 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22343-373581/.minikube
	I1227 09:05:18.438258  377183 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1227 09:05:18.440190  377183 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1227 09:05:18.440444  377183 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 09:05:18.465016  377183 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1227 09:05:18.465118  377183 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 09:05:18.519440  377183 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:63 SystemTime:2025-12-27 09:05:18.509268544 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1227 09:05:18.519535  377183 docker.go:319] overlay module found
	I1227 09:05:18.520903  377183 out.go:99] Using the docker driver based on user configuration
	I1227 09:05:18.520933  377183 start.go:309] selected driver: docker
	I1227 09:05:18.520940  377183 start.go:928] validating driver "docker" against <nil>
	I1227 09:05:18.521013  377183 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 09:05:18.571644  377183 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:63 SystemTime:2025-12-27 09:05:18.56256563 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1227 09:05:18.571863  377183 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1227 09:05:18.572408  377183 start_flags.go:417] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1227 09:05:18.572568  377183 start_flags.go:1001] Wait components to verify : map[apiserver:true system_pods:true]
	I1227 09:05:18.574232  377183 out.go:171] Using Docker driver with root privileges
	I1227 09:05:18.575179  377183 cni.go:84] Creating CNI manager for ""
	I1227 09:05:18.575264  377183 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 09:05:18.575278  377183 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I1227 09:05:18.575353  377183 start.go:353] cluster config:
	{Name:download-only-828881 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-828881 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 09:05:18.576437  377183 out.go:99] Starting "download-only-828881" primary control-plane node in "download-only-828881" cluster
	I1227 09:05:18.576453  377183 cache.go:134] Beginning downloading kic base image for docker with crio
	I1227 09:05:18.577373  377183 out.go:99] Pulling base image v0.0.48-1766570851-22316 ...
	I1227 09:05:18.577407  377183 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1227 09:05:18.577548  377183 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1227 09:05:18.593497  377183 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a to local cache
	I1227 09:05:18.593681  377183 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local cache directory
	I1227 09:05:18.593761  377183 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a to local cache
	I1227 09:05:18.922976  377183 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1227 09:05:18.923006  377183 cache.go:65] Caching tarball of preloaded images
	I1227 09:05:18.923176  377183 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1227 09:05:18.924780  377183 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1227 09:05:18.924814  377183 preload.go:269] Downloading preload from https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1227 09:05:18.924824  377183 preload.go:336] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1227 09:05:19.017095  377183 preload.go:313] Got checksum from GCS API "72bc7f8573f574c02d8c9a9b3496176b"
	I1227 09:05:19.017215  377183 download.go:114] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:72bc7f8573f574c02d8c9a9b3496176b -> /home/jenkins/minikube-integration/22343-373581/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1227 09:05:23.038885  377183 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a as a tarball
	
	
	* The control-plane node download-only-828881 host does not exist
	  To start a cluster, run: "minikube start -p download-only-828881"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.07s)
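Aside on the preload flow captured above: the test binary asks the GCS API for the tarball's MD5 and then downloads the tarball with that checksum appended to the URL, so the transfer can be verified as it lands. Below is a minimal Go sketch of the same verify-while-downloading idea, using only the standard library; it is illustrative, not minikube's actual download path (which hands the checksum-tagged URL to its download library). The URL and MD5 literals are copied from the log above; the destination path is made up.

// Sketch: stream a preload tarball to disk while hashing it, then
// compare the MD5 against the value the GCS API reported.
package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
)

// downloadWithMD5 streams url to dest and fails if the MD5 of the
// downloaded bytes does not match want (hex-encoded).
func downloadWithMD5(url, dest, want string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("GET %s: %s", url, resp.Status)
	}
	out, err := os.Create(dest)
	if err != nil {
		return err
	}
	defer out.Close()
	h := md5.New()
	// Tee the body through the hash while writing to disk.
	if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != want {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, want)
	}
	return nil
}

func main() {
	// URL and checksum taken from the log above; /tmp path is invented.
	url := "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4"
	if err := downloadWithMD5(url, "/tmp/preload.tar.lz4", "72bc7f8573f574c02d8c9a9b3496176b"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}

Hashing through io.MultiWriter keeps this single-pass, which matters for a preload tarball of several hundred megabytes.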

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.21s)

TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.15s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-828881
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.15s)

TestDownloadOnly/v1.35.0/json-events (9.27s)

=== RUN   TestDownloadOnly/v1.35.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-917129 --force --alsologtostderr --kubernetes-version=v1.35.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-917129 --force --alsologtostderr --kubernetes-version=v1.35.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (9.265089034s)
--- PASS: TestDownloadOnly/v1.35.0/json-events (9.27s)

TestDownloadOnly/v1.35.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.35.0/preload-exists
I1227 09:05:40.141481  377171 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
I1227 09:05:40.141532  377171 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22343-373581/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.35.0/preload-exists (0.00s)

TestDownloadOnly/v1.35.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.35.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-917129
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-917129: exit status 85 (69.698931ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-828881 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-828881 │ jenkins │ v1.37.0 │ 27 Dec 25 09:05 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 27 Dec 25 09:05 UTC │ 27 Dec 25 09:05 UTC │
	│ delete  │ -p download-only-828881                                                                                                                                                   │ download-only-828881 │ jenkins │ v1.37.0 │ 27 Dec 25 09:05 UTC │ 27 Dec 25 09:05 UTC │
	│ start   │ -o=json --download-only -p download-only-917129 --force --alsologtostderr --kubernetes-version=v1.35.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-917129 │ jenkins │ v1.37.0 │ 27 Dec 25 09:05 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/27 09:05:30
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1227 09:05:30.927504  377558 out.go:360] Setting OutFile to fd 1 ...
	I1227 09:05:30.927607  377558 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:05:30.927615  377558 out.go:374] Setting ErrFile to fd 2...
	I1227 09:05:30.927621  377558 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:05:30.927836  377558 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22343-373581/.minikube/bin
	I1227 09:05:30.928321  377558 out.go:368] Setting JSON to true
	I1227 09:05:30.929224  377558 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2875,"bootTime":1766823456,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1227 09:05:30.929282  377558 start.go:143] virtualization: kvm guest
	I1227 09:05:30.930878  377558 out.go:99] [download-only-917129] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1227 09:05:30.931042  377558 notify.go:221] Checking for updates...
	I1227 09:05:30.931979  377558 out.go:171] MINIKUBE_LOCATION=22343
	I1227 09:05:30.933143  377558 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 09:05:30.934390  377558 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22343-373581/kubeconfig
	I1227 09:05:30.935480  377558 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22343-373581/.minikube
	I1227 09:05:30.936479  377558 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1227 09:05:30.938276  377558 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1227 09:05:30.938517  377558 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 09:05:30.962559  377558 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1227 09:05:30.962646  377558 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 09:05:31.016535  377558 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:49 SystemTime:2025-12-27 09:05:31.007435645 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1227 09:05:31.016644  377558 docker.go:319] overlay module found
	I1227 09:05:31.018077  377558 out.go:99] Using the docker driver based on user configuration
	I1227 09:05:31.018108  377558 start.go:309] selected driver: docker
	I1227 09:05:31.018114  377558 start.go:928] validating driver "docker" against <nil>
	I1227 09:05:31.018202  377558 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 09:05:31.069370  377558 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:49 SystemTime:2025-12-27 09:05:31.060176831 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1227 09:05:31.069581  377558 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1227 09:05:31.070125  377558 start_flags.go:417] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1227 09:05:31.070258  377558 start_flags.go:1001] Wait components to verify : map[apiserver:true system_pods:true]
	I1227 09:05:31.071701  377558 out.go:171] Using Docker driver with root privileges
	I1227 09:05:31.072767  377558 cni.go:84] Creating CNI manager for ""
	I1227 09:05:31.072850  377558 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 09:05:31.072862  377558 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I1227 09:05:31.072928  377558 start.go:353] cluster config:
	{Name:download-only-917129 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:download-only-917129 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 09:05:31.073988  377558 out.go:99] Starting "download-only-917129" primary control-plane node in "download-only-917129" cluster
	I1227 09:05:31.074002  377558 cache.go:134] Beginning downloading kic base image for docker with crio
	I1227 09:05:31.074886  377558 out.go:99] Pulling base image v0.0.48-1766570851-22316 ...
	I1227 09:05:31.074914  377558 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 09:05:31.075024  377558 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1227 09:05:31.091740  377558 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a to local cache
	I1227 09:05:31.091882  377558 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local cache directory
	I1227 09:05:31.091902  377558 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local cache directory, skipping pull
	I1227 09:05:31.091908  377558 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in cache, skipping pull
	I1227 09:05:31.091921  377558 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a as a tarball
	I1227 09:05:31.167312  377558 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4
	I1227 09:05:31.167343  377558 cache.go:65] Caching tarball of preloaded images
	I1227 09:05:31.167543  377558 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 09:05:31.169352  377558 out.go:99] Downloading Kubernetes v1.35.0 preload ...
	I1227 09:05:31.169376  377558 preload.go:269] Downloading preload from https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4
	I1227 09:05:31.169382  377558 preload.go:336] getting checksum for preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1227 09:05:31.267065  377558 preload.go:313] Got checksum from GCS API "d990ae127d9fea8335098a73dac8ef3a"
	I1227 09:05:31.267111  377558 download.go:114] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:d990ae127d9fea8335098a73dac8ef3a -> /home/jenkins/minikube-integration/22343-373581/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-917129 host does not exist
	  To start a cluster, run: "minikube start -p download-only-917129"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.35.0/LogsDuration (0.07s)

TestDownloadOnly/v1.35.0/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.35.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.35.0/DeleteAll (0.21s)

TestDownloadOnly/v1.35.0/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.35.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-917129
--- PASS: TestDownloadOnly/v1.35.0/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnlyKic (0.4s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-804666 --alsologtostderr --driver=docker  --container-runtime=crio
helpers_test.go:176: Cleaning up "download-docker-804666" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-804666
--- PASS: TestDownloadOnlyKic (0.40s)

TestBinaryMirror (0.79s)

=== RUN   TestBinaryMirror
I1227 09:05:41.236602  377171 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-086395 --alsologtostderr --binary-mirror http://127.0.0.1:37011 --driver=docker  --container-runtime=crio
helpers_test.go:176: Cleaning up "binary-mirror-086395" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-086395
--- PASS: TestBinaryMirror (0.79s)
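The binary.go line above shows the other checksum convention in this suite: checksum=file:<url>, where the expected digest lives in a sibling kubectl.sha256 file instead of in the query string itself. A rough Go sketch of that scheme follows; it is an illustration, not minikube's implementation (which passes the checksum-tagged URL to its download library), and the final "kubectl checksum OK" output is invented.

// Sketch: fetch the published .sha256 file, then hash the binary
// while downloading and compare the digests.
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

// fetchChecksum downloads a .sha256 file and returns its first
// whitespace-separated token, the hex digest.
func fetchChecksum(url string) (string, error) {
	resp, err := http.Get(url)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	b, err := io.ReadAll(resp.Body)
	if err != nil {
		return "", err
	}
	fields := strings.Fields(string(b))
	if len(fields) == 0 {
		return "", fmt.Errorf("empty checksum file at %s", url)
	}
	return fields[0], nil
}

func main() {
	base := "https://dl.k8s.io/release/v1.35.0/bin/linux/amd64/kubectl"
	want, err := fetchChecksum(base + ".sha256")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	resp, err := http.Get(base)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer resp.Body.Close()
	h := sha256.New()
	if _, err := io.Copy(h, resp.Body); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != want {
		fmt.Fprintf(os.Stderr, "checksum mismatch: %s != %s\n", got, want)
		os.Exit(1)
	}
	fmt.Println("kubectl checksum OK")
}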

                                                
                                    
TestOffline (51.52s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-165603 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-165603 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (48.932467907s)
helpers_test.go:176: Cleaning up "offline-crio-165603" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-165603
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-165603: (2.587933819s)
--- PASS: TestOffline (51.52s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1002: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-102660
addons_test.go:1002: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-102660: exit status 85 (61.843453ms)

-- stdout --
	* Profile "addons-102660" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-102660"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1013: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-102660
addons_test.go:1013: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-102660: exit status 85 (61.09476ms)

-- stdout --
	* Profile "addons-102660" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-102660"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestAddons/Setup (100.38s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-102660 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p addons-102660 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (1m40.38388773s)
--- PASS: TestAddons/Setup (100.38s)

TestAddons/serial/GCPAuth/Namespaces (0.11s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:632: (dbg) Run:  kubectl --context addons-102660 create ns new-namespace
addons_test.go:646: (dbg) Run:  kubectl --context addons-102660 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.11s)

TestAddons/serial/GCPAuth/FakeCredentials (9.41s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:677: (dbg) Run:  kubectl --context addons-102660 create -f testdata/busybox.yaml
addons_test.go:684: (dbg) Run:  kubectl --context addons-102660 create sa gcp-auth-test
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [838dc247-ecef-415f-b13c-bef23dd579e8] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [838dc247-ecef-415f-b13c-bef23dd579e8] Running
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.00317301s
addons_test.go:696: (dbg) Run:  kubectl --context addons-102660 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:708: (dbg) Run:  kubectl --context addons-102660 describe sa gcp-auth-test
addons_test.go:746: (dbg) Run:  kubectl --context addons-102660 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.41s)

TestAddons/StoppedEnableDisable (18.45s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-102660
addons_test.go:174: (dbg) Done: out/minikube-linux-amd64 stop -p addons-102660: (18.181430121s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-102660
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-102660
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-102660
--- PASS: TestAddons/StoppedEnableDisable (18.45s)

TestCertOptions (24.63s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-318270 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-318270 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (21.607822496s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-318270 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-318270 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-318270 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:176: Cleaning up "cert-options-318270" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-318270
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-318270: (2.385990085s)
--- PASS: TestCertOptions (24.63s)

TestCertExpiration (209.33s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-237269 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-237269 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (21.794426627s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-237269 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-237269 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (5.155250654s)
helpers_test.go:176: Cleaning up "cert-expiration-237269" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-237269
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-237269: (2.379552526s)
--- PASS: TestCertExpiration (209.33s)

TestForceSystemdFlag (27.43s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-868742 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-868742 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (24.265108633s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-868742 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:176: Cleaning up "force-systemd-flag-868742" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-868742
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-868742: (2.794930451s)
--- PASS: TestForceSystemdFlag (27.43s)
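For reference, the ssh step above dumps /etc/crio/crio.conf.d/02-crio.conf so the test can confirm that --force-systemd actually switched CRI-O to the systemd cgroup manager. A hedged Go sketch of such a check is below; the real assertion lives in docker_test.go and drives "minikube ssh" rather than reading a local file, and hasSystemdCgroup is a hypothetical helper.

// Sketch: scan a crio.conf-style drop-in for an uncommented
// cgroup_manager setting with the value "systemd".
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// hasSystemdCgroup reports whether path contains a non-comment line
// configuring cgroup_manager = "systemd".
func hasSystemdCgroup(path string) (bool, error) {
	f, err := os.Open(path)
	if err != nil {
		return false, err
	}
	defer f.Close()
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if strings.HasPrefix(line, "#") {
			continue // skip commented-out settings
		}
		if strings.HasPrefix(line, "cgroup_manager") && strings.Contains(line, `"systemd"`) {
			return true, nil
		}
	}
	return false, sc.Err()
}

func main() {
	ok, err := hasSystemdCgroup("/etc/crio/crio.conf.d/02-crio.conf")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("systemd cgroup manager configured:", ok)
}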

                                                
                                    
TestForceSystemdEnv (22.29s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-036559 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-036559 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (19.668875018s)
helpers_test.go:176: Cleaning up "force-systemd-env-036559" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-036559
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-036559: (2.617117601s)
--- PASS: TestForceSystemdEnv (22.29s)

TestErrorSpam/setup (15.43s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-854058 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-854058 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-854058 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-854058 --driver=docker  --container-runtime=crio: (15.430819602s)
--- PASS: TestErrorSpam/setup (15.43s)

TestErrorSpam/start (0.65s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-854058 --log_dir /tmp/nospam-854058 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-854058 --log_dir /tmp/nospam-854058 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-854058 --log_dir /tmp/nospam-854058 start --dry-run
--- PASS: TestErrorSpam/start (0.65s)

TestErrorSpam/status (0.96s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-854058 --log_dir /tmp/nospam-854058 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-854058 --log_dir /tmp/nospam-854058 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-854058 --log_dir /tmp/nospam-854058 status
--- PASS: TestErrorSpam/status (0.96s)

TestErrorSpam/pause (5.39s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-854058 --log_dir /tmp/nospam-854058 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-854058 --log_dir /tmp/nospam-854058 pause: exit status 80 (2.070010166s)

-- stdout --
	* Pausing node nospam-854058 ... 

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T09:09:03Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-854058 --log_dir /tmp/nospam-854058 pause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-854058 --log_dir /tmp/nospam-854058 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-854058 --log_dir /tmp/nospam-854058 pause: exit status 80 (1.701131144s)

-- stdout --
	* Pausing node nospam-854058 ... 

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T09:09:05Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-854058 --log_dir /tmp/nospam-854058 pause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-854058 --log_dir /tmp/nospam-854058 pause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-854058 --log_dir /tmp/nospam-854058 pause: exit status 80 (1.61940558s)

-- stdout --
	* Pausing node nospam-854058 ... 

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T09:09:07Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-amd64 -p nospam-854058 --log_dir /tmp/nospam-854058 pause" failed: exit status 80
--- PASS: TestErrorSpam/pause (5.39s)
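Context for the three GUEST_PAUSE failures above: per the error text, minikube enumerates containers with "sudo runc list -f json" before pausing, and on this host /run/runc did not exist, so runc exited with status 1. The Go sketch below reproduces that probe in miniature; it is illustrative (minikube's real runner executes the command over SSH inside the node and wraps errors differently), though the runc command line and its JSON "id"/"status" fields are taken from runc itself.

// Sketch: list running containers via runc, surfacing runc's stderr
// (e.g. "open /run/runc: no such file or directory") on failure.
package main

import (
	"encoding/json"
	"fmt"
	"os"
	"os/exec"
)

// containerState mirrors the subset of runc's JSON list output that a
// pause step would care about.
type containerState struct {
	ID     string `json:"id"`
	Status string `json:"status"`
}

func listRunning() ([]containerState, error) {
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
	if err != nil {
		// Output() stores stderr on the ExitError, which is where the
		// "open /run/runc" message in the log comes from.
		if ee, ok := err.(*exec.ExitError); ok {
			return nil, fmt.Errorf("runc list: %v: %s", err, ee.Stderr)
		}
		return nil, err
	}
	var all []containerState
	if err := json.Unmarshal(out, &all); err != nil {
		return nil, err
	}
	var running []containerState
	for _, c := range all {
		if c.Status == "running" {
			running = append(running, c)
		}
	}
	return running, nil
}

func main() {
	cs, err := listRunning()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Printf("%d running containers\n", len(cs))
}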

                                                
                                    
TestErrorSpam/unpause (5.56s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-854058 --log_dir /tmp/nospam-854058 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-854058 --log_dir /tmp/nospam-854058 unpause: exit status 80 (1.845794967s)

-- stdout --
	* Unpausing node nospam-854058 ... 

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T09:09:08Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-854058 --log_dir /tmp/nospam-854058 unpause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-854058 --log_dir /tmp/nospam-854058 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-854058 --log_dir /tmp/nospam-854058 unpause: exit status 80 (1.731745693s)

-- stdout --
	* Unpausing node nospam-854058 ... 

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T09:09:10Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-854058 --log_dir /tmp/nospam-854058 unpause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-854058 --log_dir /tmp/nospam-854058 unpause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-854058 --log_dir /tmp/nospam-854058 unpause: exit status 80 (1.985238486s)

-- stdout --
	* Unpausing node nospam-854058 ... 

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T09:09:12Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-amd64 -p nospam-854058 --log_dir /tmp/nospam-854058 unpause" failed: exit status 80
--- PASS: TestErrorSpam/unpause (5.56s)

TestErrorSpam/stop (8.08s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-854058 --log_dir /tmp/nospam-854058 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-854058 --log_dir /tmp/nospam-854058 stop: (7.878310357s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-854058 --log_dir /tmp/nospam-854058 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-854058 --log_dir /tmp/nospam-854058 stop
--- PASS: TestErrorSpam/stop (8.08s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1865: local sync path: /home/jenkins/minikube-integration/22343-373581/.minikube/files/etc/test/nested/copy/377171/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (39.39s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2244: (dbg) Run:  out/minikube-linux-amd64 start -p functional-583037 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:2244: (dbg) Done: out/minikube-linux-amd64 start -p functional-583037 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (39.392240366s)
--- PASS: TestFunctional/serial/StartWithProxy (39.39s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (5.94s)

=== RUN   TestFunctional/serial/SoftStart
I1227 09:10:05.520725  377171 config.go:182] Loaded profile config "functional-583037": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-583037 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-583037 --alsologtostderr -v=8: (5.934913261s)
functional_test.go:678: soft start took 5.935717444s for "functional-583037" cluster.
I1227 09:10:11.456126  377171 config.go:182] Loaded profile config "functional-583037": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
--- PASS: TestFunctional/serial/SoftStart (5.94s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.06s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-583037 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.06s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.82s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1069: (dbg) Run:  out/minikube-linux-amd64 -p functional-583037 cache add registry.k8s.io/pause:3.1
functional_test.go:1069: (dbg) Run:  out/minikube-linux-amd64 -p functional-583037 cache add registry.k8s.io/pause:3.3
functional_test.go:1069: (dbg) Run:  out/minikube-linux-amd64 -p functional-583037 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.82s)
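Roughly, cache add pulls the image on the host, stores it in the minikube cache, and loads it into the node's container runtime; the verify_cache_inside_node test below then confirms it with crictl. A condensed by-hand equivalent, assuming the same profile:

	# cache an image and confirm the node runtime can see it
	out/minikube-linux-amd64 -p functional-583037 cache add registry.k8s.io/pause:3.1
	out/minikube-linux-amd64 -p functional-583037 ssh sudo crictl images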

TestFunctional/serial/CacheCmd/cache/add_local (1.9s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1097: (dbg) Run:  docker build -t minikube-local-cache-test:functional-583037 /tmp/TestFunctionalserialCacheCmdcacheadd_local1700668688/001
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-583037 cache add minikube-local-cache-test:functional-583037
functional_test.go:1109: (dbg) Done: out/minikube-linux-amd64 -p functional-583037 cache add minikube-local-cache-test:functional-583037: (1.588374201s)
functional_test.go:1114: (dbg) Run:  out/minikube-linux-amd64 -p functional-583037 cache delete minikube-local-cache-test:functional-583037
functional_test.go:1103: (dbg) Run:  docker rmi minikube-local-cache-test:functional-583037
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.90s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1122: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1130: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.27s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1144: (dbg) Run:  out/minikube-linux-amd64 -p functional-583037 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.27s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.47s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1167: (dbg) Run:  out/minikube-linux-amd64 -p functional-583037 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-583037 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1173: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-583037 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (268.504196ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-583037 cache reload
functional_test.go:1183: (dbg) Run:  out/minikube-linux-amd64 -p functional-583037 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.47s)
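The sequence above is the whole reload contract: remove the image inside the node, observe that crictl inspecti fails (the expected non-zero exit), then let cache reload push the host-cached copy back. Condensed, assuming the same profile:

	out/minikube-linux-amd64 -p functional-583037 ssh sudo crictl rmi registry.k8s.io/pause:latest
	out/minikube-linux-amd64 -p functional-583037 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # fails: image gone
	out/minikube-linux-amd64 -p functional-583037 cache reload
	out/minikube-linux-amd64 -p functional-583037 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again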

TestFunctional/serial/CacheCmd/cache/delete (0.12s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1192: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1192: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/MinikubeKubectlCmd (0.12s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-583037 kubectl -- --context functional-583037 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-583037 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

TestFunctional/serial/ExtraConfig (52.17s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-583037 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-583037 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (52.16602626s)
functional_test.go:776: restart took 52.16613639s for "functional-583037" cluster.
I1227 09:11:10.664050  377171 config.go:182] Loaded profile config "functional-583037": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
--- PASS: TestFunctional/serial/ExtraConfig (52.17s)
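--extra-config takes component.key=value and forwards the flag to the named component; here it enables the NamespaceAutoProvision admission plugin on the API server, and restarting the existing profile applies it. The shape of the invocation:

	# <component>.<key>=<value>; restarting the profile applies the setting
	out/minikube-linux-amd64 start -p functional-583037 \
	  --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all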

TestFunctional/serial/ComponentHealth (0.07s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-583037 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (1.12s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1256: (dbg) Run:  out/minikube-linux-amd64 -p functional-583037 logs
functional_test.go:1256: (dbg) Done: out/minikube-linux-amd64 -p functional-583037 logs: (1.121737443s)
--- PASS: TestFunctional/serial/LogsCmd (1.12s)

TestFunctional/serial/LogsFileCmd (1.15s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 -p functional-583037 logs --file /tmp/TestFunctionalserialLogsFileCmd2455898062/001/logs.txt
functional_test.go:1270: (dbg) Done: out/minikube-linux-amd64 -p functional-583037 logs --file /tmp/TestFunctionalserialLogsFileCmd2455898062/001/logs.txt: (1.152499864s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.15s)

TestFunctional/serial/InvalidService (5.48s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2331: (dbg) Run:  kubectl --context functional-583037 apply -f testdata/invalidsvc.yaml
functional_test.go:2345: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-583037
functional_test.go:2345: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-583037: exit status 115 (344.017334ms)

-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:32377 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2337: (dbg) Run:  kubectl --context functional-583037 delete -f testdata/invalidsvc.yaml
functional_test.go:2337: (dbg) Done: kubectl --context functional-583037 delete -f testdata/invalidsvc.yaml: (1.974972056s)
--- PASS: TestFunctional/serial/InvalidService (5.48s)

TestFunctional/parallel/ConfigCmd (0.45s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1219: (dbg) Run:  out/minikube-linux-amd64 -p functional-583037 config unset cpus
functional_test.go:1219: (dbg) Run:  out/minikube-linux-amd64 -p functional-583037 config get cpus
functional_test.go:1219: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-583037 config get cpus: exit status 14 (88.618698ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1219: (dbg) Run:  out/minikube-linux-amd64 -p functional-583037 config set cpus 2
functional_test.go:1219: (dbg) Run:  out/minikube-linux-amd64 -p functional-583037 config get cpus
functional_test.go:1219: (dbg) Run:  out/minikube-linux-amd64 -p functional-583037 config unset cpus
functional_test.go:1219: (dbg) Run:  out/minikube-linux-amd64 -p functional-583037 config get cpus
functional_test.go:1219: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-583037 config get cpus: exit status 14 (81.732566ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.45s)
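The exit-status-14 branches are the point of this test: config get exits 14 when the key is unset, so the test round-trips set/get/unset. By hand:

	out/minikube-linux-amd64 -p functional-583037 config set cpus 2
	out/minikube-linux-amd64 -p functional-583037 config get cpus     # prints 2, exit 0
	out/minikube-linux-amd64 -p functional-583037 config unset cpus
	out/minikube-linux-amd64 -p functional-583037 config get cpus     # exit 14: key not found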

TestFunctional/parallel/DashboardCmd (17.26s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 0 -p functional-583037 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 0 -p functional-583037 --alsologtostderr -v=1] ...
helpers_test.go:526: unable to kill pid 412168: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (17.26s)

TestFunctional/parallel/DryRun (0.4s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:994: (dbg) Run:  out/minikube-linux-amd64 start -p functional-583037 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:994: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-583037 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (181.670706ms)

-- stdout --
	* [functional-583037] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22343
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22343-373581/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22343-373581/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1227 09:11:31.599747  411605 out.go:360] Setting OutFile to fd 1 ...
	I1227 09:11:31.600083  411605 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:11:31.600096  411605 out.go:374] Setting ErrFile to fd 2...
	I1227 09:11:31.600103  411605 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:11:31.600396  411605 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22343-373581/.minikube/bin
	I1227 09:11:31.601010  411605 out.go:368] Setting JSON to false
	I1227 09:11:31.602360  411605 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3236,"bootTime":1766823456,"procs":242,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1227 09:11:31.602432  411605 start.go:143] virtualization: kvm guest
	I1227 09:11:31.604369  411605 out.go:179] * [functional-583037] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1227 09:11:31.605806  411605 out.go:179]   - MINIKUBE_LOCATION=22343
	I1227 09:11:31.605826  411605 notify.go:221] Checking for updates...
	I1227 09:11:31.608093  411605 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 09:11:31.609117  411605 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22343-373581/kubeconfig
	I1227 09:11:31.610099  411605 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22343-373581/.minikube
	I1227 09:11:31.611544  411605 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1227 09:11:31.612555  411605 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1227 09:11:31.613954  411605 config.go:182] Loaded profile config "functional-583037": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:11:31.614465  411605 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 09:11:31.645133  411605 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1227 09:11:31.645273  411605 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 09:11:31.707725  411605 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-27 09:11:31.697316098 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1227 09:11:31.707906  411605 docker.go:319] overlay module found
	I1227 09:11:31.710110  411605 out.go:179] * Using the docker driver based on existing profile
	I1227 09:11:31.711287  411605 start.go:309] selected driver: docker
	I1227 09:11:31.711302  411605 start.go:928] validating driver "docker" against &{Name:functional-583037 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:functional-583037 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 09:11:31.711437  411605 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 09:11:31.713159  411605 out.go:203] 
	W1227 09:11:31.714261  411605 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1227 09:11:31.715296  411605 out.go:203] 

** /stderr **
functional_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 start -p functional-583037 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.40s)
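Note that --dry-run still runs full validation: the requested 250MiB is below the 1800MB usable minimum, so minikube exits 23 with RSRC_INSUFFICIENT_REQ_MEMORY without touching the cluster. To observe just the validation failure:

	# validation runs even with --dry-run; under the memory floor this exits 23
	out/minikube-linux-amd64 start -p functional-583037 --dry-run --memory 250MB \
	  --driver=docker --container-runtime=crio; echo "exit: $?"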

TestFunctional/parallel/InternationalLanguage (0.18s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1040: (dbg) Run:  out/minikube-linux-amd64 start -p functional-583037 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1040: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-583037 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (175.745981ms)

-- stdout --
	* [functional-583037] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22343
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22343-373581/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22343-373581/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1227 09:11:39.886829  412503 out.go:360] Setting OutFile to fd 1 ...
	I1227 09:11:39.886944  412503 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:11:39.886962  412503 out.go:374] Setting ErrFile to fd 2...
	I1227 09:11:39.886966  412503 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:11:39.887268  412503 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22343-373581/.minikube/bin
	I1227 09:11:39.887761  412503 out.go:368] Setting JSON to false
	I1227 09:11:39.888820  412503 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3244,"bootTime":1766823456,"procs":250,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1227 09:11:39.888888  412503 start.go:143] virtualization: kvm guest
	I1227 09:11:39.890739  412503 out.go:179] * [functional-583037] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1227 09:11:39.894781  412503 notify.go:221] Checking for updates...
	I1227 09:11:39.894821  412503 out.go:179]   - MINIKUBE_LOCATION=22343
	I1227 09:11:39.897857  412503 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 09:11:39.898957  412503 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22343-373581/kubeconfig
	I1227 09:11:39.900129  412503 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22343-373581/.minikube
	I1227 09:11:39.901286  412503 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1227 09:11:39.902468  412503 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1227 09:11:39.904859  412503 config.go:182] Loaded profile config "functional-583037": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:11:39.905658  412503 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 09:11:39.932735  412503 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1227 09:11:39.932872  412503 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 09:11:39.992354  412503 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-27 09:11:39.981397792 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1227 09:11:39.992511  412503 docker.go:319] overlay module found
	I1227 09:11:39.994256  412503 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1227 09:11:39.995510  412503 start.go:309] selected driver: docker
	I1227 09:11:39.995535  412503 start.go:928] validating driver "docker" against &{Name:functional-583037 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:functional-583037 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 09:11:39.995661  412503 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 09:11:39.997663  412503 out.go:203] 
	W1227 09:11:39.998864  412503 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1227 09:11:39.999985  412503 out.go:203] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.18s)

TestFunctional/parallel/StatusCmd (0.95s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-583037 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-583037 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-583037 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.95s)
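status supports the three output shapes exercised here: the default table, a Go template via -f (field names as in the log above, e.g. .Host, .Kubelet, .APIServer, .Kubeconfig), and JSON via -o json. For example:

	out/minikube-linux-amd64 -p functional-583037 status -f 'host:{{.Host}},apiserver:{{.APIServer}}'
	out/minikube-linux-amd64 -p functional-583037 status -o json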

TestFunctional/parallel/ServiceCmdConnect (8.54s)
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1641: (dbg) Run:  kubectl --context functional-583037 create deployment hello-node-connect --image ghcr.io/medyagh/image-mirrors/kicbase/echo-server
functional_test.go:1645: (dbg) Run:  kubectl --context functional-583037 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1650: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:353: "hello-node-connect-5d95464fd4-rtw5q" [58558b67-dab9-4af6-871e-73bcc17c98c7] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-connect-5d95464fd4-rtw5q" [58558b67-dab9-4af6-871e-73bcc17c98c7] Running
functional_test.go:1650: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.003377614s
functional_test.go:1659: (dbg) Run:  out/minikube-linux-amd64 -p functional-583037 service hello-node-connect --url
functional_test.go:1665: found endpoint for hello-node-connect: http://192.168.49.2:30827
functional_test.go:1685: http://192.168.49.2:30827: success! body:
Request served by hello-node-connect-5d95464fd4-rtw5q

HTTP/1.1 GET /

Host: 192.168.49.2:30827
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (8.54s)
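The end-to-end pattern: create a deployment, expose it as a NodePort service, then let minikube service --url resolve the node IP and assigned port (http://192.168.49.2:30827 above). Condensed:

	kubectl --context functional-583037 create deployment hello-node-connect --image ghcr.io/medyagh/image-mirrors/kicbase/echo-server
	kubectl --context functional-583037 expose deployment hello-node-connect --type=NodePort --port=8080
	out/minikube-linux-amd64 -p functional-583037 service hello-node-connect --url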

TestFunctional/parallel/AddonsCmd (0.15s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1700: (dbg) Run:  out/minikube-linux-amd64 -p functional-583037 addons list
functional_test.go:1712: (dbg) Run:  out/minikube-linux-amd64 -p functional-583037 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.15s)

TestFunctional/parallel/PersistentVolumeClaim (20.71s)
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:353: "storage-provisioner" [f9145662-7bd4-454b-b420-36d2ee2b4138] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.003992978s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-583037 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-583037 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-583037 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-583037 apply -f testdata/storage-provisioner/pod.yaml
I1227 09:11:24.472035  377171 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [b9d0cc74-e665-43a2-8639-8243ce41d15b] Pending
helpers_test.go:353: "sp-pod" [b9d0cc74-e665-43a2-8639-8243ce41d15b] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:353: "sp-pod" [b9d0cc74-e665-43a2-8639-8243ce41d15b] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.004247446s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-583037 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-583037 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-583037 apply -f testdata/storage-provisioner/pod.yaml
I1227 09:11:32.743271  377171 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [8d39d3f8-bcf9-4fc1-b2bb-349218815048] Pending
helpers_test.go:353: "sp-pod" [8d39d3f8-bcf9-4fc1-b2bb-349218815048] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.003477487s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-583037 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (20.71s)
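The double pod lifecycle is the point of the test: /tmp/mount is backed by the PVC, so the foo file written before the first pod is deleted is still listed by the replacement pod. The persistence check, reduced to its essentials:

	kubectl --context functional-583037 exec sp-pod -- touch /tmp/mount/foo
	kubectl --context functional-583037 delete -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-583037 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-583037 exec sp-pod -- ls /tmp/mount   # foo survives the pod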

TestFunctional/parallel/SSHCmd (0.63s)
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1735: (dbg) Run:  out/minikube-linux-amd64 -p functional-583037 ssh "echo hello"
functional_test.go:1752: (dbg) Run:  out/minikube-linux-amd64 -p functional-583037 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.63s)

TestFunctional/parallel/CpCmd (1.74s)
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-583037 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-583037 ssh -n functional-583037 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-583037 cp functional-583037:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd408898521/001/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-583037 ssh -n functional-583037 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-583037 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-583037 ssh -n functional-583037 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.74s)

TestFunctional/parallel/MySQL (20.92s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1803: (dbg) Run:  kubectl --context functional-583037 replace --force -f testdata/mysql.yaml
functional_test.go:1809: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:353: "mysql-7d7b65bc95-tjsfg" [08470e99-b784-48b7-b710-f7ffb63eede4] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:353: "mysql-7d7b65bc95-tjsfg" [08470e99-b784-48b7-b710-f7ffb63eede4] Running
functional_test.go:1809: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 15.00686244s
functional_test.go:1817: (dbg) Run:  kubectl --context functional-583037 exec mysql-7d7b65bc95-tjsfg -- mysql -ppassword -e "show databases;"
functional_test.go:1817: (dbg) Non-zero exit: kubectl --context functional-583037 exec mysql-7d7b65bc95-tjsfg -- mysql -ppassword -e "show databases;": exit status 1 (148.334702ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1817: (dbg) Run:  kubectl --context functional-583037 exec mysql-7d7b65bc95-tjsfg -- mysql -ppassword -e "show databases;"
functional_test.go:1817: (dbg) Non-zero exit: kubectl --context functional-583037 exec mysql-7d7b65bc95-tjsfg -- mysql -ppassword -e "show databases;": exit status 1 (106.619771ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1817: (dbg) Run:  kubectl --context functional-583037 exec mysql-7d7b65bc95-tjsfg -- mysql -ppassword -e "show databases;"
functional_test.go:1817: (dbg) Non-zero exit: kubectl --context functional-583037 exec mysql-7d7b65bc95-tjsfg -- mysql -ppassword -e "show databases;": exit status 1 (138.662228ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1817: (dbg) Run:  kubectl --context functional-583037 exec mysql-7d7b65bc95-tjsfg -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (20.92s)

TestFunctional/parallel/FileSync (0.29s)
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1939: Checking for existence of /etc/test/nested/copy/377171/hosts within VM
functional_test.go:1941: (dbg) Run:  out/minikube-linux-amd64 -p functional-583037 ssh "sudo cat /etc/test/nested/copy/377171/hosts"
functional_test.go:1946: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.29s)

TestFunctional/parallel/CertSync (1.72s)
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1982: Checking for existence of /etc/ssl/certs/377171.pem within VM
functional_test.go:1983: (dbg) Run:  out/minikube-linux-amd64 -p functional-583037 ssh "sudo cat /etc/ssl/certs/377171.pem"
functional_test.go:1982: Checking for existence of /usr/share/ca-certificates/377171.pem within VM
functional_test.go:1983: (dbg) Run:  out/minikube-linux-amd64 -p functional-583037 ssh "sudo cat /usr/share/ca-certificates/377171.pem"
functional_test.go:1982: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1983: (dbg) Run:  out/minikube-linux-amd64 -p functional-583037 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2009: Checking for existence of /etc/ssl/certs/3771712.pem within VM
functional_test.go:2010: (dbg) Run:  out/minikube-linux-amd64 -p functional-583037 ssh "sudo cat /etc/ssl/certs/3771712.pem"
functional_test.go:2009: Checking for existence of /usr/share/ca-certificates/3771712.pem within VM
functional_test.go:2010: (dbg) Run:  out/minikube-linux-amd64 -p functional-583037 ssh "sudo cat /usr/share/ca-certificates/3771712.pem"
functional_test.go:2009: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2010: (dbg) Run:  out/minikube-linux-amd64 -p functional-583037 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.72s)
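Each certificate is checked in three places: /etc/ssl/certs/<name>.pem, /usr/share/ca-certificates/<name>.pem, and an OpenSSL subject-hash alias (the .0 files, e.g. 51391683.0). A sketch of spot-checking one by hand; the hash value is specific to this run's cert:

	out/minikube-linux-amd64 -p functional-583037 ssh "sudo cat /etc/ssl/certs/377171.pem"
	out/minikube-linux-amd64 -p functional-583037 ssh "sudo cat /etc/ssl/certs/51391683.0"   # same cert via its hash name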

TestFunctional/parallel/NodeLabels (0.07s)
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-583037 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.59s)
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2037: (dbg) Run:  out/minikube-linux-amd64 -p functional-583037 ssh "sudo systemctl is-active docker"
functional_test.go:2037: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-583037 ssh "sudo systemctl is-active docker": exit status 1 (313.016509ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2037: (dbg) Run:  out/minikube-linux-amd64 -p functional-583037 ssh "sudo systemctl is-active containerd"
functional_test.go:2037: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-583037 ssh "sudo systemctl is-active containerd": exit status 1 (277.736552ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.59s)
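The exit statuses here are expected, not failures: systemctl is-active exits 0 only when the unit is active, and an inactive unit yields exit 3, which ssh surfaces as "Process exited with status 3". Since this profile runs crio, both checks should report inactive:

	out/minikube-linux-amd64 -p functional-583037 ssh "sudo systemctl is-active docker"      # inactive, exit 3
	out/minikube-linux-amd64 -p functional-583037 ssh "sudo systemctl is-active containerd"  # inactive, exit 3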

TestFunctional/parallel/License (0.32s)
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2298: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.32s)

TestFunctional/parallel/ServiceCmd/DeployApp (7.18s)
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1456: (dbg) Run:  kubectl --context functional-583037 create deployment hello-node --image ghcr.io/medyagh/image-mirrors/kicbase/echo-server
functional_test.go:1460: (dbg) Run:  kubectl --context functional-583037 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1465: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:353: "hello-node-684ffdf98c-lq77c" [56639642-b649-4858-91f0-b54ec061df13] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-684ffdf98c-lq77c" [56639642-b649-4858-91f0-b54ec061df13] Running
functional_test.go:1465: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.003789544s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.18s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.44s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-583037 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-583037 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-583037 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-583037 tunnel --alsologtostderr] ...
helpers_test.go:526: unable to kill pid 407204: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.44s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-583037 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.21s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-583037 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:353: "nginx-svc" [79041da2-44e3-464f-b50f-7ff05a90ea0f] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx-svc" [79041da2-44e3-464f-b50f-7ff05a90ea0f] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.003034233s
I1227 09:11:28.598203  377171 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.21s)
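
WaitService/Setup applies a LoadBalancer service and waits for its pod; the IngressIP check later polls the service for the tunnel-assigned address. A minimal sketch of that polling loop, assuming the nginx-svc service from the log and a running `minikube tunnel`:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	const jp = `jsonpath={.status.loadBalancer.ingress[0].ip}`
	deadline := time.Now().Add(4 * time.Minute) // matches the test's 4m wait
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", "functional-583037",
			"get", "svc", "nginx-svc", "-o", jp).Output()
		if ip := strings.TrimSpace(string(out)); err == nil && ip != "" {
			fmt.Println("tunnel-assigned ingress IP:", ip)
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for ingress IP")
}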

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.5s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1474: (dbg) Run:  out/minikube-linux-amd64 -p functional-583037 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.50s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.49s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1504: (dbg) Run:  out/minikube-linux-amd64 -p functional-583037 service list -o json
functional_test.go:1509: Took "493.844915ms" to run "out/minikube-linux-amd64 -p functional-583037 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.49s)
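
A small sketch of consuming `service list -o json` programmatically; since this report does not show the emitted schema, it decodes generically rather than assuming one:

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-583037",
		"service", "list", "-o", "json").Output()
	if err != nil {
		log.Fatal(err)
	}
	var v any // schema not shown in this report, so decode generically
	if err := json.Unmarshal(out, &v); err != nil {
		log.Fatal(err)
	}
	if arr, ok := v.([]any); ok {
		fmt.Printf("service list returned %d entries\n", len(arr))
	} else {
		fmt.Println("service list returned a non-array document")
	}
}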

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.34s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1524: (dbg) Run:  out/minikube-linux-amd64 -p functional-583037 service --namespace=default --https --url hello-node
functional_test.go:1537: found endpoint: https://192.168.49.2:32293
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.34s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.34s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1555: (dbg) Run:  out/minikube-linux-amd64 -p functional-583037 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.34s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.34s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1574: (dbg) Run:  out/minikube-linux-amd64 -p functional-583037 service hello-node --url
functional_test.go:1580: found endpoint for hello-node: http://192.168.49.2:32293
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.34s)
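
The HTTPS/Format/URL checks all resolve an endpoint string and probe it. A minimal sketch that fetches the NodePort URL with `service --url` and issues a plain GET, using the binary path and names from the log:

package main

import (
	"fmt"
	"log"
	"net/http"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-583037",
		"service", "hello-node", "--url").Output()
	if err != nil {
		log.Fatal(err)
	}
	url := strings.TrimSpace(string(out)) // e.g. http://192.168.49.2:32293
	resp, err := http.Get(url)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	fmt.Println(url, "->", resp.Status)
}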

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-583037 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.5s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1295: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.50s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.101.244.219 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-583037 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)
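
RunSecondTunnel and DeleteTunnel start `minikube tunnel` as a background process and stop it with a signal; the "signal: terminated" in the log is the expected result of that stop. A sketch of the same lifecycle, assuming SIGTERM is the stop signal (consistent with the log):

package main

import (
	"fmt"
	"log"
	"os/exec"
	"syscall"
	"time"
)

func main() {
	tunnel := exec.Command("out/minikube-linux-amd64", "-p", "functional-583037",
		"tunnel", "--alsologtostderr")
	if err := tunnel.Start(); err != nil {
		log.Fatal(err)
	}
	time.Sleep(5 * time.Second) // give the tunnel time to set up routes
	// Terminate rather than kill so minikube can clean up after itself;
	// Wait then reports the "signal: terminated" seen in the log.
	if err := tunnel.Process.Signal(syscall.SIGTERM); err != nil {
		log.Fatal(err)
	}
	_ = tunnel.Wait()
	fmt.Println("tunnel stopped")
}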

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (15.08s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:74: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-583037 /tmp/TestFunctionalparallelMountCmdany-port2194647802/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:108: wrote "test-1766826688791378876" to /tmp/TestFunctionalparallelMountCmdany-port2194647802/001/created-by-test
functional_test_mount_test.go:108: wrote "test-1766826688791378876" to /tmp/TestFunctionalparallelMountCmdany-port2194647802/001/created-by-test-removed-by-pod
functional_test_mount_test.go:108: wrote "test-1766826688791378876" to /tmp/TestFunctionalparallelMountCmdany-port2194647802/001/test-1766826688791378876
functional_test_mount_test.go:116: (dbg) Run:  out/minikube-linux-amd64 -p functional-583037 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:116: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-583037 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (315.089515ms)

                                                
                                                
** stderr **
	ssh: Process exited with status 1

** /stderr **
I1227 09:11:29.106875  377171 retry.go:84] will retry after 700ms: exit status 1
functional_test_mount_test.go:116: (dbg) Run:  out/minikube-linux-amd64 -p functional-583037 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:130: (dbg) Run:  out/minikube-linux-amd64 -p functional-583037 ssh -- ls -la /mount-9p
functional_test_mount_test.go:134: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 27 09:11 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 27 09:11 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 27 09:11 test-1766826688791378876
functional_test_mount_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p functional-583037 ssh cat /mount-9p/test-1766826688791378876
functional_test_mount_test.go:149: (dbg) Run:  kubectl --context functional-583037 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:154: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:353: "busybox-mount" [992b6f46-c269-4e99-980b-6c000605f428] Pending
helpers_test.go:353: "busybox-mount" [992b6f46-c269-4e99-980b-6c000605f428] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:353: "busybox-mount" [992b6f46-c269-4e99-980b-6c000605f428] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "busybox-mount" [992b6f46-c269-4e99-980b-6c000605f428] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:154: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 12.004211657s
functional_test_mount_test.go:170: (dbg) Run:  kubectl --context functional-583037 logs busybox-mount
functional_test_mount_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p functional-583037 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p functional-583037 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:91: (dbg) Run:  out/minikube-linux-amd64 -p functional-583037 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:95: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-583037 /tmp/TestFunctionalparallelMountCmdany-port2194647802/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (15.08s)
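
The any-port flow is: start `minikube mount` in the background, then confirm the 9p mount from inside the guest with findmnt, retrying briefly as the log does. A sketch under those assumptions; /tmp/hostdir is a hypothetical existing host directory:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"time"
)

func main() {
	// /tmp/hostdir is hypothetical; it must exist on the host.
	mount := exec.Command("out/minikube-linux-amd64", "mount",
		"-p", "functional-583037", "/tmp/hostdir:/mount-9p")
	if err := mount.Start(); err != nil {
		log.Fatal(err)
	}
	defer mount.Process.Kill() // mirrors the "stopping [...]" teardown step

	for i := 0; i < 5; i++ {
		out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-583037",
			"ssh", "findmnt -T /mount-9p").CombinedOutput()
		if err == nil {
			fmt.Printf("mounted:\n%s", out)
			return
		}
		time.Sleep(700 * time.Millisecond) // the log retries on a short backoff
	}
	log.Fatal("mount never appeared")
}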

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.43s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1330: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1335: Took "355.421819ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1344: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1349: Took "69.916573ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.43s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.43s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1381: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1386: Took "349.594891ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1394: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1399: Took "78.83271ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.43s)

                                                
                                    
TestFunctional/parallel/Version/short (0.07s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-583037 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

                                                
                                    
TestFunctional/parallel/Version/components (0.53s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2280: (dbg) Run:  out/minikube-linux-amd64 -p functional-583037 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.53s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.16s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2129: (dbg) Run:  out/minikube-linux-amd64 -p functional-583037 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.16s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.17s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2129: (dbg) Run:  out/minikube-linux-amd64 -p functional-583037 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.17s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.16s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2129: (dbg) Run:  out/minikube-linux-amd64 -p functional-583037 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.16s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-583037 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-583037 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.35.0
registry.k8s.io/kube-proxy:v1.35.0
registry.k8s.io/kube-controller-manager:v1.35.0
registry.k8s.io/kube-apiserver:v1.35.0
registry.k8s.io/etcd:3.6.6-0
registry.k8s.io/coredns/coredns:v1.13.1
public.ecr.aws/nginx/nginx:alpine
public.ecr.aws/docker/library/mysql:8.4
localhost/minikube-local-cache-test:functional-583037
ghcr.io/medyagh/image-mirrors/kicbase/echo-server:latest
ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-583037
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-583037 image ls --format short --alsologtostderr:
I1227 09:11:48.422538  415730 out.go:360] Setting OutFile to fd 1 ...
I1227 09:11:48.422661  415730 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1227 09:11:48.422673  415730 out.go:374] Setting ErrFile to fd 2...
I1227 09:11:48.422677  415730 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1227 09:11:48.422902  415730 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22343-373581/.minikube/bin
I1227 09:11:48.423506  415730 config.go:182] Loaded profile config "functional-583037": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
I1227 09:11:48.423612  415730 config.go:182] Loaded profile config "functional-583037": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
I1227 09:11:48.424088  415730 cli_runner.go:164] Run: docker container inspect functional-583037 --format={{.State.Status}}
I1227 09:11:48.444903  415730 ssh_runner.go:195] Run: systemctl --version
I1227 09:11:48.444955  415730 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-583037
I1227 09:11:48.464834  415730 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33153 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/functional-583037/id_rsa Username:docker}
I1227 09:11:48.555011  415730 ssh_runner.go:195] Run: sudo crictl --timeout=10s images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)
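
A typical consumer of `image ls --format short` just scans the line-per-image output. A minimal sketch asserting that one of the images listed above is present:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-583037",
		"image", "ls", "--format", "short").Output()
	if err != nil {
		log.Fatal(err)
	}
	want := "registry.k8s.io/pause:3.10.1" // present in the listing above
	for _, line := range strings.Split(string(out), "\n") {
		if strings.TrimSpace(line) == want {
			fmt.Println("found", want)
			return
		}
	}
	log.Fatalf("%s not in image list", want)
}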

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-583037 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-583037 image ls --format table --alsologtostderr:
┌───────────────────────────────────────────────────┬───────────────────────────────────────┬───────────────┬────────┐
│                       IMAGE                       │                  TAG                  │   IMAGE ID    │  SIZE  │
├───────────────────────────────────────────────────┼───────────────────────────────────────┼───────────────┼────────┤
│ ghcr.io/medyagh/image-mirrors/kicbase/echo-server │ functional-583037                     │ 9056ab77afb8e │ 4.95MB │
│ ghcr.io/medyagh/image-mirrors/kicbase/echo-server │ latest                                │ 9056ab77afb8e │ 4.95MB │
│ localhost/minikube-local-cache-test               │ functional-583037                     │ 42884880efa40 │ 3.33kB │
│ public.ecr.aws/docker/library/mysql               │ 8.4                                   │ 5e3dcc4ab5604 │ 804MB  │
│ registry.k8s.io/kube-scheduler                    │ v1.35.0                               │ 550794e3b12ac │ 52.8MB │
│ registry.k8s.io/pause                             │ 3.1                                   │ da86e6ba6ca19 │ 747kB  │
│ registry.k8s.io/pause                             │ 3.10.1                                │ cd073f4c5f6a8 │ 742kB  │
│ registry.k8s.io/pause                             │ 3.3                                   │ 0184c1613d929 │ 686kB  │
│ registry.k8s.io/kube-controller-manager           │ v1.35.0                               │ 2c9a4b058bd7e │ 76.9MB │
│ registry.k8s.io/kube-proxy                        │ v1.35.0                               │ 32652ff1bbe6b │ 72MB   │
│ registry.k8s.io/pause                             │ latest                                │ 350b164e7ae1d │ 247kB  │
│ docker.io/kindest/kindnetd                        │ v20250512-df8de77b                    │ 409467f978b4a │ 109MB  │
│ public.ecr.aws/nginx/nginx                        │ alpine                                │ 04da2b0513cd7 │ 55.2MB │
│ docker.io/kindest/kindnetd                        │ v20251212-v0.29.0-alpha-105-g20ccfc88 │ 4921d7a6dffa9 │ 108MB  │
│ gcr.io/k8s-minikube/storage-provisioner           │ v5                                    │ 6e38f40d628db │ 31.5MB │
│ registry.k8s.io/coredns/coredns                   │ v1.13.1                               │ aa5e3ebc0dfed │ 79.2MB │
│ registry.k8s.io/etcd                              │ 3.6.6-0                               │ 0a108f7189562 │ 63.6MB │
│ registry.k8s.io/kube-apiserver                    │ v1.35.0                               │ 5c6acd67e9cd1 │ 90.8MB │
│ gcr.io/k8s-minikube/busybox                       │ 1.28.4-glibc                          │ 56cc512116c8f │ 4.63MB │
└───────────────────────────────────────────────────┴───────────────────────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-583037 image ls --format table --alsologtostderr:
I1227 09:11:48.891854  415999 out.go:360] Setting OutFile to fd 1 ...
I1227 09:11:48.891964  415999 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1227 09:11:48.891975  415999 out.go:374] Setting ErrFile to fd 2...
I1227 09:11:48.891981  415999 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1227 09:11:48.892158  415999 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22343-373581/.minikube/bin
I1227 09:11:48.892740  415999 config.go:182] Loaded profile config "functional-583037": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
I1227 09:11:48.892868  415999 config.go:182] Loaded profile config "functional-583037": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
I1227 09:11:48.893310  415999 cli_runner.go:164] Run: docker container inspect functional-583037 --format={{.State.Status}}
I1227 09:11:48.912953  415999 ssh_runner.go:195] Run: systemctl --version
I1227 09:11:48.913021  415999 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-583037
I1227 09:11:48.934816  415999 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33153 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/functional-583037/id_rsa Username:docker}
I1227 09:11:49.030179  415999 ssh_runner.go:195] Run: sudo crictl --timeout=10s images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-583037 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-583037 image ls --format json --alsologtostderr:
[{"id":"4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251","repoDigests":["docker.io/kindest/kindnetd@sha256:377e2e7a513148f7c942b51cd57bdce1589940df856105384ac7f753a1ab43ae","docker.io/kindest/kindnetd@sha256:7c22558dc06a570d46ea6e8a73b23cdc754eb81f7c08d3441a3171ad359ffc27"],"repoTags":["docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88"],"size":"107598204"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"5e3dcc4ab5604ab9bdf1054833d4f0ac396465de830ccac42d4f59131db9ba23","repoDigests":["public.ecr.aws/docker/library/mysql@sha256:1f5b0aca09cfa06d9a7b89b28d349c1e01ba0d31339a4786fbcb3d5927070130","public.ecr.aws/docker/library/m
ysql@sha256:eaf64e87ae0d1136d46405ad56c9010de509fd5b949b9c8ede45c94f47804d21"],"repoTags":["public.ecr.aws/docker/library/mysql:8.4"],"size":"803760263"},{"id":"04da2b0513cd78d8d29d60575cef80813c5496c15a801921e47efdf0feba39e5","repoDigests":["public.ecr.aws/nginx/nginx@sha256:00e053577693e0ee5f7f8b433cdb249624af188622d0da5df20eef4e25a0881c","public.ecr.aws/nginx/nginx@sha256:a411c634df4374901a4a9370626801998f159652f627b1cdfbbbe012adcd6c76"],"repoTags":["public.ecr.aws/nginx/nginx:alpine"],"size":"55157106"},{"id":"2c9a4b058bd7e6ec479c38f9e8a1dac2f5ee5b0a3ebda6dfac968e8720229508","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:3e343fd915d2e214b9a68c045b94017832927edb89aafa471324f8d05a191111","registry.k8s.io/kube-controller-manager@sha256:e0ce4c7d278a001734bbd8020ed1b7e535ae9d2412c700032eb3df190ea91a62"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.35.0"],"size":"76893520"},{"id":"32652ff1bbe6b6df16a3dc621fcad4e6e185a2852bffc5847aab79e36c54bfb8","repoDigests":["registry.k8s.io/kube-pr
oxy@sha256:ad87ae17f92f26144bd5a35fc86a73f2fae6effd1666db51bc03f8e9213de532","registry.k8s.io/kube-proxy@sha256:c818ca1eff765e35348b77e484da915175cdf483f298e1f9885ed706fcbcb34c"],"repoTags":["registry.k8s.io/kube-proxy:v1.35.0"],"size":"71986585"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139","repoDigests":["registry.k8s.io/coredns/coredns@sha256:246e7333fde10251c693b68f13d21d6d64c7dbad866bbfa11bd49315e3f725a7","registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"],"repoTags":["registry.k8s.io/coredns/coredns:v1.13.1"],"size":"79193994"},{"id":"0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6
ff430e2","repoDigests":["registry.k8s.io/etcd@sha256:5279f56db4f32772bb41e47ca44c553f5c87a08fdf339d74c23a4cdc3c388d6a","registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890"],"repoTags":["registry.k8s.io/etcd:3.6.6-0"],"size":"63582405"},{"id":"550794e3b12ac21ec7fd940bdfb45f7c8dae3c52acd8c70e580b882de20c3dcc","repoDigests":["registry.k8s.io/kube-scheduler@sha256:0ab622491a82532e01876d55e365c08c5bac01bcd5444a8ed58c1127ab47819f","registry.k8s.io/kube-scheduler@sha256:dd2b6a420b171e83748166a66372f43384b3142fc4f6f56a6240a9e152cccd69"],"repoTags":["registry.k8s.io/kube-scheduler:v1.35.0"],"size":"52763986"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca95
18083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807
a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"5c6acd67e9cd10eb60a246cd233db251ef62ea97e6572f897e873f0cb648f499","repoDigests":["registry.k8s.io/kube-apiserver@sha256:32f98b308862e1cf98c900927d84630fb86a836a480f02752a779eb85c1489f3","registry.k8s.io/kube-apiserver@sha256:50e01ce089b6b6508e2f68ba0da943a3bc4134596e7e2afaac27dd26f71aca7a"],"repoTags":["registry.k8s.io/kube-apiserver:v1.35.0"],"size":"90844140"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["ghcr.io/medyagh/image-mirrors/kicbase/echo-ser
ver@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","ghcr.io/medyagh/image-mirrors/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","ghcr.io/medyagh/image-mirrors/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-583037","ghcr.io/medyagh/image-mirrors/kicbase/echo-server:latest"],"size":"4945146"},{"id":"42884880efa40dc8a28fff9ecc3bdd35d0a9619da084e3910a723364ca192ba7","repoDigests":["localhost/minikube-local-cache-test@sha256:0019e27f7ad1eda2e2c34142fa1dc459f6d2d1ec0523bb1506ceccea5eea7b19"],"repoTags":["localhost/minikube-local-cache-test:functional-583037"],"size":"3330"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-583037 image ls --format json --alsologtostderr:
I1227 09:11:48.659939  415847 out.go:360] Setting OutFile to fd 1 ...
I1227 09:11:48.660048  415847 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1227 09:11:48.660058  415847 out.go:374] Setting ErrFile to fd 2...
I1227 09:11:48.660064  415847 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1227 09:11:48.660263  415847 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22343-373581/.minikube/bin
I1227 09:11:48.660903  415847 config.go:182] Loaded profile config "functional-583037": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
I1227 09:11:48.661028  415847 config.go:182] Loaded profile config "functional-583037": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
I1227 09:11:48.661522  415847 cli_runner.go:164] Run: docker container inspect functional-583037 --format={{.State.Status}}
I1227 09:11:48.680250  415847 ssh_runner.go:195] Run: systemctl --version
I1227 09:11:48.680307  415847 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-583037
I1227 09:11:48.701415  415847 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33153 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/functional-583037/id_rsa Username:docker}
I1227 09:11:48.799073  415847 ssh_runner.go:195] Run: sudo crictl --timeout=10s images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)
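
The JSON dump above shows the fields `image ls --format json` emits per image: id, repoDigests, repoTags, and size (bytes, as a string). A sketch decoding it into a matching struct:

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"` // bytes, reported as a string
}

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-583037",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		log.Fatal(err)
	}
	var images []image
	if err := json.Unmarshal(out, &images); err != nil {
		log.Fatal(err)
	}
	for _, img := range images {
		if len(img.RepoTags) > 0 {
			fmt.Println(img.RepoTags[0], img.Size)
		}
	}
}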

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-583037 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-583037 image ls --format yaml --alsologtostderr:
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- ghcr.io/medyagh/image-mirrors/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- ghcr.io/medyagh/image-mirrors/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- ghcr.io/medyagh/image-mirrors/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-583037
- ghcr.io/medyagh/image-mirrors/kicbase/echo-server:latest
size: "4945146"
- id: 5e3dcc4ab5604ab9bdf1054833d4f0ac396465de830ccac42d4f59131db9ba23
repoDigests:
- public.ecr.aws/docker/library/mysql@sha256:1f5b0aca09cfa06d9a7b89b28d349c1e01ba0d31339a4786fbcb3d5927070130
- public.ecr.aws/docker/library/mysql@sha256:eaf64e87ae0d1136d46405ad56c9010de509fd5b949b9c8ede45c94f47804d21
repoTags:
- public.ecr.aws/docker/library/mysql:8.4
size: "803760263"
- id: 550794e3b12ac21ec7fd940bdfb45f7c8dae3c52acd8c70e580b882de20c3dcc
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:0ab622491a82532e01876d55e365c08c5bac01bcd5444a8ed58c1127ab47819f
- registry.k8s.io/kube-scheduler@sha256:dd2b6a420b171e83748166a66372f43384b3142fc4f6f56a6240a9e152cccd69
repoTags:
- registry.k8s.io/kube-scheduler:v1.35.0
size: "52763986"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 04da2b0513cd78d8d29d60575cef80813c5496c15a801921e47efdf0feba39e5
repoDigests:
- public.ecr.aws/nginx/nginx@sha256:00e053577693e0ee5f7f8b433cdb249624af188622d0da5df20eef4e25a0881c
- public.ecr.aws/nginx/nginx@sha256:a411c634df4374901a4a9370626801998f159652f627b1cdfbbbe012adcd6c76
repoTags:
- public.ecr.aws/nginx/nginx:alpine
size: "55157106"
- id: 0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2
repoDigests:
- registry.k8s.io/etcd@sha256:5279f56db4f32772bb41e47ca44c553f5c87a08fdf339d74c23a4cdc3c388d6a
- registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890
repoTags:
- registry.k8s.io/etcd:3.6.6-0
size: "63582405"
- id: 5c6acd67e9cd10eb60a246cd233db251ef62ea97e6572f897e873f0cb648f499
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:32f98b308862e1cf98c900927d84630fb86a836a480f02752a779eb85c1489f3
- registry.k8s.io/kube-apiserver@sha256:50e01ce089b6b6508e2f68ba0da943a3bc4134596e7e2afaac27dd26f71aca7a
repoTags:
- registry.k8s.io/kube-apiserver:v1.35.0
size: "90844140"
- id: 32652ff1bbe6b6df16a3dc621fcad4e6e185a2852bffc5847aab79e36c54bfb8
repoDigests:
- registry.k8s.io/kube-proxy@sha256:ad87ae17f92f26144bd5a35fc86a73f2fae6effd1666db51bc03f8e9213de532
- registry.k8s.io/kube-proxy@sha256:c818ca1eff765e35348b77e484da915175cdf483f298e1f9885ed706fcbcb34c
repoTags:
- registry.k8s.io/kube-proxy:v1.35.0
size: "71986585"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:246e7333fde10251c693b68f13d21d6d64c7dbad866bbfa11bd49315e3f725a7
- registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6
repoTags:
- registry.k8s.io/coredns/coredns:v1.13.1
size: "79193994"
- id: 2c9a4b058bd7e6ec479c38f9e8a1dac2f5ee5b0a3ebda6dfac968e8720229508
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:3e343fd915d2e214b9a68c045b94017832927edb89aafa471324f8d05a191111
- registry.k8s.io/kube-controller-manager@sha256:e0ce4c7d278a001734bbd8020ed1b7e535ae9d2412c700032eb3df190ea91a62
repoTags:
- registry.k8s.io/kube-controller-manager:v1.35.0
size: "76893520"
- id: 4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251
repoDigests:
- docker.io/kindest/kindnetd@sha256:377e2e7a513148f7c942b51cd57bdce1589940df856105384ac7f753a1ab43ae
- docker.io/kindest/kindnetd@sha256:7c22558dc06a570d46ea6e8a73b23cdc754eb81f7c08d3441a3171ad359ffc27
repoTags:
- docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
size: "107598204"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 42884880efa40dc8a28fff9ecc3bdd35d0a9619da084e3910a723364ca192ba7
repoDigests:
- localhost/minikube-local-cache-test@sha256:0019e27f7ad1eda2e2c34142fa1dc459f6d2d1ec0523bb1506ceccea5eea7b19
repoTags:
- localhost/minikube-local-cache-test:functional-583037
size: "3330"

functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-583037 image ls --format yaml --alsologtostderr:
I1227 09:11:48.433093  415736 out.go:360] Setting OutFile to fd 1 ...
I1227 09:11:48.433212  415736 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1227 09:11:48.433222  415736 out.go:374] Setting ErrFile to fd 2...
I1227 09:11:48.433226  415736 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1227 09:11:48.433533  415736 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22343-373581/.minikube/bin
I1227 09:11:48.434292  415736 config.go:182] Loaded profile config "functional-583037": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
I1227 09:11:48.434403  415736 config.go:182] Loaded profile config "functional-583037": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
I1227 09:11:48.434916  415736 cli_runner.go:164] Run: docker container inspect functional-583037 --format={{.State.Status}}
I1227 09:11:48.456074  415736 ssh_runner.go:195] Run: systemctl --version
I1227 09:11:48.456137  415736 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-583037
I1227 09:11:48.474439  415736 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33153 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/functional-583037/id_rsa Username:docker}
I1227 09:11:48.563683  415736 ssh_runner.go:195] Run: sudo crictl --timeout=10s images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (3.31s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-583037 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-583037 ssh pgrep buildkitd: exit status 1 (278.598342ms)

                                                
                                                
** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-583037 image build -t localhost/my-image:functional-583037 testdata/build --alsologtostderr
2025/12/27 09:11:48 [DEBUG] GET http://127.0.0.1:36325/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-583037 image build -t localhost/my-image:functional-583037 testdata/build --alsologtostderr: (2.814679301s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-583037 image build -t localhost/my-image:functional-583037 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 751aa09f665
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-583037
--> 3e95728f077
Successfully tagged localhost/my-image:functional-583037
3e95728f0778ada17663cbb4bfcfdb00877cd9d4bf2558b3daddd5bb6da47195
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-583037 image build -t localhost/my-image:functional-583037 testdata/build --alsologtostderr:
I1227 09:11:48.930197  416011 out.go:360] Setting OutFile to fd 1 ...
I1227 09:11:48.930442  416011 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1227 09:11:48.930453  416011 out.go:374] Setting ErrFile to fd 2...
I1227 09:11:48.930458  416011 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1227 09:11:48.930655  416011 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22343-373581/.minikube/bin
I1227 09:11:48.931253  416011 config.go:182] Loaded profile config "functional-583037": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
I1227 09:11:48.931946  416011 config.go:182] Loaded profile config "functional-583037": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
I1227 09:11:48.932367  416011 cli_runner.go:164] Run: docker container inspect functional-583037 --format={{.State.Status}}
I1227 09:11:48.950675  416011 ssh_runner.go:195] Run: systemctl --version
I1227 09:11:48.950744  416011 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-583037
I1227 09:11:48.970034  416011 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33153 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/functional-583037/id_rsa Username:docker}
I1227 09:11:49.062525  416011 build_images.go:162] Building image from path: /tmp/build.2581227829.tar
I1227 09:11:49.062616  416011 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1227 09:11:49.070911  416011 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2581227829.tar
I1227 09:11:49.074474  416011 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2581227829.tar: stat -c "%s %y" /var/lib/minikube/build/build.2581227829.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.2581227829.tar': No such file or directory
I1227 09:11:49.074507  416011 ssh_runner.go:362] scp /tmp/build.2581227829.tar --> /var/lib/minikube/build/build.2581227829.tar (3072 bytes)
I1227 09:11:49.091500  416011 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2581227829
I1227 09:11:49.098831  416011 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2581227829 -xf /var/lib/minikube/build/build.2581227829.tar
I1227 09:11:49.106723  416011 crio.go:315] Building image: /var/lib/minikube/build/build.2581227829
I1227 09:11:49.106780  416011 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-583037 /var/lib/minikube/build/build.2581227829 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1227 09:11:51.664923  416011 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-583037 /var/lib/minikube/build/build.2581227829 --cgroup-manager=cgroupfs: (2.558082501s)
I1227 09:11:51.664991  416011 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2581227829
I1227 09:11:51.672882  416011 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2581227829.tar
I1227 09:11:51.679890  416011 build_images.go:218] Built localhost/my-image:functional-583037 from /tmp/build.2581227829.tar
I1227 09:11:51.679924  416011 build_images.go:134] succeeded building to: functional-583037
I1227 09:11:51.679931  416011 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-583037 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.31s)
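
The inner log shows what `image build` does on crio: the build context is tarred, copied under /var/lib/minikube/build, and built with podman. From the outside it is a single CLI call; a sketch of that call, where testdata/build stands in for any directory containing a Dockerfile:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	build := exec.Command("out/minikube-linux-amd64", "-p", "functional-583037",
		"image", "build", "-t", "localhost/my-image:functional-583037", "testdata/build")
	if out, err := build.CombinedOutput(); err != nil {
		log.Fatalf("build failed: %v\n%s", err, out)
	}
	ls, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-583037",
		"image", "ls").Output()
	if err != nil {
		log.Fatal(err)
	}
	if strings.Contains(string(ls), "localhost/my-image:functional-583037") {
		fmt.Println("image built and visible in the runtime")
	}
}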

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (0.79s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull ghcr.io/medyagh/image-mirrors/kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag ghcr.io/medyagh/image-mirrors/kicbase/echo-server:1.0 ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-583037
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.79s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.06s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-583037 image load --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-583037 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-583037 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.06s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.02s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-583037 image load --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-583037 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-583037 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.02s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull ghcr.io/medyagh/image-mirrors/kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag ghcr.io/medyagh/image-mirrors/kicbase/echo-server:latest ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-583037
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-583037 image load --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-583037 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-583037 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.29s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.93s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:219: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-583037 /tmp/TestFunctionalparallelMountCmdspecific-port214204957/001:/mount-9p --alsologtostderr -v=1 --port 42641]
functional_test_mount_test.go:249: (dbg) Run:  out/minikube-linux-amd64 -p functional-583037 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:249: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-583037 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (348.413813ms)

                                                
                                                
** stderr **
	ssh: Process exited with status 1

** /stderr **
I1227 09:11:44.217742  377171 retry.go:84] will retry after 400ms: exit status 1
functional_test_mount_test.go:249: (dbg) Run:  out/minikube-linux-amd64 -p functional-583037 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:263: (dbg) Run:  out/minikube-linux-amd64 -p functional-583037 ssh -- ls -la /mount-9p
functional_test_mount_test.go:267: guest mount directory contents
total 0
functional_test_mount_test.go:269: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-583037 /tmp/TestFunctionalparallelMountCmdspecific-port214204957/001:/mount-9p --alsologtostderr -v=1 --port 42641] ...
functional_test_mount_test.go:270: reading mount text
functional_test_mount_test.go:284: done reading mount text
functional_test_mount_test.go:236: (dbg) Run:  out/minikube-linux-amd64 -p functional-583037 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:236: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-583037 ssh "sudo umount -f /mount-9p": exit status 1 (322.70614ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr **
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:238: "out/minikube-linux-amd64 -p functional-583037 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:240: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-583037 /tmp/TestFunctionalparallelMountCmdspecific-port214204957/001:/mount-9p --alsologtostderr -v=1 --port 42641] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.93s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.38s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-583037 image save ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-583037 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.38s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.57s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-583037 image rm ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-583037 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-583037 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.57s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-583037 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:424: (dbg) Done: out/minikube-linux-amd64 -p functional-583037 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr: (1.791985142s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-583037 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.04s)
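
Note: ImageSaveToFile, ImageRemove, and ImageLoadFromFile together form a save/remove/load round trip for a cached image. A rough by-hand equivalent, with the image name and tar path as placeholders:

    out/minikube-linux-amd64 -p <profile> image save <image> /tmp/image.tar
    out/minikube-linux-amd64 -p <profile> image rm <image>
    out/minikube-linux-amd64 -p <profile> image load /tmp/image.tar
    out/minikube-linux-amd64 -p <profile> image ls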

TestFunctional/parallel/MountCmd/VerifyCleanup (2.01s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:304: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-583037 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4041341973/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:304: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-583037 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4041341973/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:304: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-583037 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4041341973/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-amd64 -p functional-583037 ssh "findmnt -T" /mount1
functional_test_mount_test.go:331: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-583037 ssh "findmnt -T" /mount1: exit status 1 (404.547893ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-amd64 -p functional-583037 ssh "findmnt -T" /mount1
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-amd64 -p functional-583037 ssh "findmnt -T" /mount2
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-amd64 -p functional-583037 ssh "findmnt -T" /mount3
functional_test_mount_test.go:376: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-583037 --kill=true
functional_test_mount_test.go:319: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-583037 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4041341973/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:319: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-583037 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4041341973/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:319: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-583037 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4041341973/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.01s)
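
Note: VerifyCleanup starts three mount daemons and then checks that a single "mount --kill=true" tears all of them down for the profile; the three "unable to find parent, assuming dead" lines confirm the daemons were already gone when the stop helper looked for them. A sketch, with the directory and profile as placeholders:

    out/minikube-linux-amd64 mount -p <profile> <dir>:/mount1 &
    out/minikube-linux-amd64 mount -p <profile> <dir>:/mount2 &
    out/minikube-linux-amd64 mount -p <profile> <dir>:/mount3 &
    out/minikube-linux-amd64 mount -p <profile> --kill=true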

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.6s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-583037
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-583037 image save --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-583037 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-583037
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.60s)
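
Note: unlike ImageSaveToFile, "image save --daemon" exports the image directly into the host's Docker daemon rather than to a tarball, which is why the verification step here is a plain "docker image inspect" on the host. A sketch, with the image name as a placeholder:

    docker rmi <image>
    out/minikube-linux-amd64 -p <profile> image save --daemon <image>
    docker image inspect <image>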

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f ghcr.io/medyagh/image-mirrors/kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-583037
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-583037
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-583037
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (131.38s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-675014 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1227 09:12:23.536369  377171 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/addons-102660/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:12:23.541650  377171 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/addons-102660/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:12:23.551965  377171 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/addons-102660/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:12:23.572258  377171 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/addons-102660/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:12:23.612575  377171 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/addons-102660/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:12:23.692919  377171 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/addons-102660/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:12:23.853644  377171 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/addons-102660/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:12:24.174149  377171 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/addons-102660/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:12:24.814833  377171 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/addons-102660/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:12:26.095778  377171 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/addons-102660/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:12:28.657093  377171 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/addons-102660/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:12:33.777328  377171 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/addons-102660/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:12:44.017569  377171 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/addons-102660/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:13:04.498085  377171 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/addons-102660/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:13:45.458813  377171 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/addons-102660/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-675014 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (2m10.672488674s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-675014 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (131.38s)
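
Note: the HA cluster under test is created by a single start invocation; --ha provisions multiple control-plane nodes and --wait true blocks until cluster components report healthy. The equivalent command, with the profile name as a placeholder:

    out/minikube-linux-amd64 start -p <profile> --ha --memory 3072 --wait true --driver=docker --container-runtime=crio
    out/minikube-linux-amd64 -p <profile> status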

TestMultiControlPlane/serial/DeployApp (6.93s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-675014 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-675014 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-675014 kubectl -- rollout status deployment/busybox: (5.010982411s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-675014 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-675014 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-675014 kubectl -- exec busybox-769dd8b7dd-mzrh4 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-675014 kubectl -- exec busybox-769dd8b7dd-rh2tm -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-675014 kubectl -- exec busybox-769dd8b7dd-xpc4n -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-675014 kubectl -- exec busybox-769dd8b7dd-mzrh4 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-675014 kubectl -- exec busybox-769dd8b7dd-rh2tm -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-675014 kubectl -- exec busybox-769dd8b7dd-xpc4n -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-675014 kubectl -- exec busybox-769dd8b7dd-mzrh4 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-675014 kubectl -- exec busybox-769dd8b7dd-rh2tm -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-675014 kubectl -- exec busybox-769dd8b7dd-xpc4n -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.93s)
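
Note: DeployApp checks in-cluster DNS from every busybox replica at three levels: an external name (kubernetes.io), the short service name (kubernetes.default), and the fully qualified service name (kubernetes.default.svc.cluster.local). Per pod this reduces to a command of the following form, with the pod name as a placeholder:

    kubectl --context <profile> exec <pod> -- nslookup kubernetes.default.svc.cluster.local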

TestMultiControlPlane/serial/PingHostFromPods (1.01s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-675014 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-675014 kubectl -- exec busybox-769dd8b7dd-mzrh4 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-675014 kubectl -- exec busybox-769dd8b7dd-mzrh4 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-675014 kubectl -- exec busybox-769dd8b7dd-rh2tm -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-675014 kubectl -- exec busybox-769dd8b7dd-rh2tm -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-675014 kubectl -- exec busybox-769dd8b7dd-xpc4n -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-675014 kubectl -- exec busybox-769dd8b7dd-xpc4n -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.01s)
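
Note: PingHostFromPods resolves host.minikube.internal inside each pod and pings the resulting address (192.168.49.1, the gateway of the default docker network in this run), confirming that pods can reach the host. Per pod, with the pod name as a placeholder:

    kubectl --context <profile> exec <pod> -- sh -c "nslookup host.minikube.internal"
    kubectl --context <profile> exec <pod> -- sh -c "ping -c 1 192.168.49.1"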

TestMultiControlPlane/serial/AddWorkerNode (26.75s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-675014 node add --alsologtostderr -v 5
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-675014 node add --alsologtostderr -v 5: (25.890481086s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-675014 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (26.75s)

TestMultiControlPlane/serial/NodeLabels (0.06s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-675014 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.88s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.88s)

TestMultiControlPlane/serial/CopyFile (16.34s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-675014 status --output json --alsologtostderr -v 5
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-675014 cp testdata/cp-test.txt ha-675014:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-675014 ssh -n ha-675014 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-675014 cp ha-675014:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile112426746/001/cp-test_ha-675014.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-675014 ssh -n ha-675014 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-675014 cp ha-675014:/home/docker/cp-test.txt ha-675014-m02:/home/docker/cp-test_ha-675014_ha-675014-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-675014 ssh -n ha-675014 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-675014 ssh -n ha-675014-m02 "sudo cat /home/docker/cp-test_ha-675014_ha-675014-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-675014 cp ha-675014:/home/docker/cp-test.txt ha-675014-m03:/home/docker/cp-test_ha-675014_ha-675014-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-675014 ssh -n ha-675014 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-675014 ssh -n ha-675014-m03 "sudo cat /home/docker/cp-test_ha-675014_ha-675014-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-675014 cp ha-675014:/home/docker/cp-test.txt ha-675014-m04:/home/docker/cp-test_ha-675014_ha-675014-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-675014 ssh -n ha-675014 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-675014 ssh -n ha-675014-m04 "sudo cat /home/docker/cp-test_ha-675014_ha-675014-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-675014 cp testdata/cp-test.txt ha-675014-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-675014 ssh -n ha-675014-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-675014 cp ha-675014-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile112426746/001/cp-test_ha-675014-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-675014 ssh -n ha-675014-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-675014 cp ha-675014-m02:/home/docker/cp-test.txt ha-675014:/home/docker/cp-test_ha-675014-m02_ha-675014.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-675014 ssh -n ha-675014-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-675014 ssh -n ha-675014 "sudo cat /home/docker/cp-test_ha-675014-m02_ha-675014.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-675014 cp ha-675014-m02:/home/docker/cp-test.txt ha-675014-m03:/home/docker/cp-test_ha-675014-m02_ha-675014-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-675014 ssh -n ha-675014-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-675014 ssh -n ha-675014-m03 "sudo cat /home/docker/cp-test_ha-675014-m02_ha-675014-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-675014 cp ha-675014-m02:/home/docker/cp-test.txt ha-675014-m04:/home/docker/cp-test_ha-675014-m02_ha-675014-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-675014 ssh -n ha-675014-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-675014 ssh -n ha-675014-m04 "sudo cat /home/docker/cp-test_ha-675014-m02_ha-675014-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-675014 cp testdata/cp-test.txt ha-675014-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-675014 ssh -n ha-675014-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-675014 cp ha-675014-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile112426746/001/cp-test_ha-675014-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-675014 ssh -n ha-675014-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-675014 cp ha-675014-m03:/home/docker/cp-test.txt ha-675014:/home/docker/cp-test_ha-675014-m03_ha-675014.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-675014 ssh -n ha-675014-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-675014 ssh -n ha-675014 "sudo cat /home/docker/cp-test_ha-675014-m03_ha-675014.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-675014 cp ha-675014-m03:/home/docker/cp-test.txt ha-675014-m02:/home/docker/cp-test_ha-675014-m03_ha-675014-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-675014 ssh -n ha-675014-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-675014 ssh -n ha-675014-m02 "sudo cat /home/docker/cp-test_ha-675014-m03_ha-675014-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-675014 cp ha-675014-m03:/home/docker/cp-test.txt ha-675014-m04:/home/docker/cp-test_ha-675014-m03_ha-675014-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-675014 ssh -n ha-675014-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-675014 ssh -n ha-675014-m04 "sudo cat /home/docker/cp-test_ha-675014-m03_ha-675014-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-675014 cp testdata/cp-test.txt ha-675014-m04:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-675014 ssh -n ha-675014-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-675014 cp ha-675014-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile112426746/001/cp-test_ha-675014-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-675014 ssh -n ha-675014-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-675014 cp ha-675014-m04:/home/docker/cp-test.txt ha-675014:/home/docker/cp-test_ha-675014-m04_ha-675014.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-675014 ssh -n ha-675014-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-675014 ssh -n ha-675014 "sudo cat /home/docker/cp-test_ha-675014-m04_ha-675014.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-675014 cp ha-675014-m04:/home/docker/cp-test.txt ha-675014-m02:/home/docker/cp-test_ha-675014-m04_ha-675014-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-675014 ssh -n ha-675014-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-675014 ssh -n ha-675014-m02 "sudo cat /home/docker/cp-test_ha-675014-m04_ha-675014-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-675014 cp ha-675014-m04:/home/docker/cp-test.txt ha-675014-m03:/home/docker/cp-test_ha-675014-m04_ha-675014-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-675014 ssh -n ha-675014-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-675014 ssh -n ha-675014-m03 "sudo cat /home/docker/cp-test_ha-675014-m04_ha-675014-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (16.34s)
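
Note: CopyFile exercises "minikube cp" for every source/destination node pair and verifies each copy by catting the file over ssh. The three shapes covered above, with node and path names as placeholders:

    out/minikube-linux-amd64 -p <profile> cp testdata/cp-test.txt <node>:/home/docker/cp-test.txt
    out/minikube-linux-amd64 -p <profile> cp <node>:/home/docker/cp-test.txt /tmp/cp-test-local.txt
    out/minikube-linux-amd64 -p <profile> cp <node-a>:/home/docker/cp-test.txt <node-b>:/home/docker/cp-test-copy.txt
    out/minikube-linux-amd64 -p <profile> ssh -n <node> "sudo cat /home/docker/cp-test.txt"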

TestMultiControlPlane/serial/StopSecondaryNode (14.24s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-675014 node stop m02 --alsologtostderr -v 5
E1227 09:15:07.380596  377171 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/addons-102660/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-675014 node stop m02 --alsologtostderr -v 5: (13.54733093s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-675014 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-675014 status --alsologtostderr -v 5: exit status 7 (689.835327ms)

-- stdout --
	ha-675014
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-675014-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-675014-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-675014-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I1227 09:15:12.095568  436327 out.go:360] Setting OutFile to fd 1 ...
	I1227 09:15:12.095933  436327 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:15:12.095944  436327 out.go:374] Setting ErrFile to fd 2...
	I1227 09:15:12.095949  436327 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:15:12.096202  436327 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22343-373581/.minikube/bin
	I1227 09:15:12.096398  436327 out.go:368] Setting JSON to false
	I1227 09:15:12.096426  436327 mustload.go:66] Loading cluster: ha-675014
	I1227 09:15:12.096503  436327 notify.go:221] Checking for updates...
	I1227 09:15:12.096826  436327 config.go:182] Loaded profile config "ha-675014": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:15:12.096847  436327 status.go:174] checking status of ha-675014 ...
	I1227 09:15:12.097296  436327 cli_runner.go:164] Run: docker container inspect ha-675014 --format={{.State.Status}}
	I1227 09:15:12.116188  436327 status.go:371] ha-675014 host status = "Running" (err=<nil>)
	I1227 09:15:12.116210  436327 host.go:66] Checking if "ha-675014" exists ...
	I1227 09:15:12.116477  436327 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-675014
	I1227 09:15:12.135761  436327 host.go:66] Checking if "ha-675014" exists ...
	I1227 09:15:12.136060  436327 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 09:15:12.136107  436327 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-675014
	I1227 09:15:12.155219  436327 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/ha-675014/id_rsa Username:docker}
	I1227 09:15:12.243753  436327 ssh_runner.go:195] Run: systemctl --version
	I1227 09:15:12.250567  436327 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 09:15:12.263851  436327 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 09:15:12.320211  436327 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:74 SystemTime:2025-12-27 09:15:12.310083092 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1227 09:15:12.320819  436327 kubeconfig.go:125] found "ha-675014" server: "https://192.168.49.254:8443"
	I1227 09:15:12.320867  436327 api_server.go:166] Checking apiserver status ...
	I1227 09:15:12.320920  436327 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 09:15:12.333975  436327 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1246/cgroup
	I1227 09:15:12.342274  436327 ssh_runner.go:195] Run: sudo grep ^0:: /proc/1246/cgroup
	I1227 09:15:12.349577  436327 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/system.slice/crio-57a510e7697a8731a0a9e49b615c47d1415e01b57fa9be7f3e98950f5a2eed5a.scope/container/cgroup.freeze
	I1227 09:15:12.356881  436327 api_server.go:299] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1227 09:15:12.361145  436327 api_server.go:325] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1227 09:15:12.361169  436327 status.go:463] ha-675014 apiserver status = Running (err=<nil>)
	I1227 09:15:12.361181  436327 status.go:176] ha-675014 status: &{Name:ha-675014 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1227 09:15:12.361201  436327 status.go:174] checking status of ha-675014-m02 ...
	I1227 09:15:12.361438  436327 cli_runner.go:164] Run: docker container inspect ha-675014-m02 --format={{.State.Status}}
	I1227 09:15:12.379859  436327 status.go:371] ha-675014-m02 host status = "Stopped" (err=<nil>)
	I1227 09:15:12.379879  436327 status.go:384] host is not running, skipping remaining checks
	I1227 09:15:12.379885  436327 status.go:176] ha-675014-m02 status: &{Name:ha-675014-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1227 09:15:12.379904  436327 status.go:174] checking status of ha-675014-m03 ...
	I1227 09:15:12.380197  436327 cli_runner.go:164] Run: docker container inspect ha-675014-m03 --format={{.State.Status}}
	I1227 09:15:12.398219  436327 status.go:371] ha-675014-m03 host status = "Running" (err=<nil>)
	I1227 09:15:12.398241  436327 host.go:66] Checking if "ha-675014-m03" exists ...
	I1227 09:15:12.398500  436327 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-675014-m03
	I1227 09:15:12.415329  436327 host.go:66] Checking if "ha-675014-m03" exists ...
	I1227 09:15:12.415631  436327 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 09:15:12.415690  436327 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-675014-m03
	I1227 09:15:12.433634  436327 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/ha-675014-m03/id_rsa Username:docker}
	I1227 09:15:12.520899  436327 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 09:15:12.533470  436327 kubeconfig.go:125] found "ha-675014" server: "https://192.168.49.254:8443"
	I1227 09:15:12.533501  436327 api_server.go:166] Checking apiserver status ...
	I1227 09:15:12.533544  436327 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 09:15:12.544485  436327 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1169/cgroup
	I1227 09:15:12.552446  436327 ssh_runner.go:195] Run: sudo grep ^0:: /proc/1169/cgroup
	I1227 09:15:12.559585  436327 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/system.slice/crio-27689500ed713362676b112602ed0fde79b04ca9258a909138cf8f561490d349.scope/container/cgroup.freeze
	I1227 09:15:12.566906  436327 api_server.go:299] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1227 09:15:12.570947  436327 api_server.go:325] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1227 09:15:12.570971  436327 status.go:463] ha-675014-m03 apiserver status = Running (err=<nil>)
	I1227 09:15:12.570981  436327 status.go:176] ha-675014-m03 status: &{Name:ha-675014-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1227 09:15:12.571000  436327 status.go:174] checking status of ha-675014-m04 ...
	I1227 09:15:12.571251  436327 cli_runner.go:164] Run: docker container inspect ha-675014-m04 --format={{.State.Status}}
	I1227 09:15:12.588776  436327 status.go:371] ha-675014-m04 host status = "Running" (err=<nil>)
	I1227 09:15:12.588806  436327 host.go:66] Checking if "ha-675014-m04" exists ...
	I1227 09:15:12.589085  436327 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-675014-m04
	I1227 09:15:12.606138  436327 host.go:66] Checking if "ha-675014-m04" exists ...
	I1227 09:15:12.606437  436327 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 09:15:12.606499  436327 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-675014-m04
	I1227 09:15:12.623345  436327 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33173 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/ha-675014-m04/id_rsa Username:docker}
	I1227 09:15:12.710547  436327 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 09:15:12.722324  436327 status.go:176] ha-675014-m04 status: &{Name:ha-675014-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (14.24s)
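
Note: with m02 stopped, "status" exits with code 7 instead of 0, and that non-zero exit is exactly what the test asserts; the status dump above is informational, not a failure. Sketch:

    out/minikube-linux-amd64 -p <profile> node stop m02
    out/minikube-linux-amd64 -p <profile> status    # expected to exit non-zero (7 here) while a node is down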

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.7s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.70s)

TestMultiControlPlane/serial/RestartSecondaryNode (8.38s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-675014 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-675014 node start m02 --alsologtostderr -v 5: (7.467630103s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-675014 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (8.38s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.88s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.88s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (112.27s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-675014 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-675014 stop --alsologtostderr -v 5
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-675014 stop --alsologtostderr -v 5: (54.541814182s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-675014 start --wait true --alsologtostderr -v 5
E1227 09:16:18.670097  377171 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/functional-583037/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:16:18.675362  377171 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/functional-583037/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:16:18.685620  377171 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/functional-583037/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:16:18.705872  377171 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/functional-583037/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:16:18.746125  377171 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/functional-583037/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:16:18.826435  377171 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/functional-583037/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:16:18.986855  377171 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/functional-583037/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:16:19.307432  377171 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/functional-583037/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:16:19.948453  377171 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/functional-583037/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:16:21.228735  377171 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/functional-583037/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:16:23.789221  377171 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/functional-583037/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:16:28.909983  377171 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/functional-583037/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:16:39.150531  377171 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/functional-583037/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:16:59.630876  377171 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/functional-583037/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-675014 start --wait true --alsologtostderr -v 5: (57.598435447s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-675014 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (112.27s)
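
Note: RestartClusterKeepsNodes stops the entire profile, starts it again with --wait true, and compares "node list" before and after to confirm that no node was dropped across the restart. Sketch:

    out/minikube-linux-amd64 -p <profile> node list
    out/minikube-linux-amd64 -p <profile> stop
    out/minikube-linux-amd64 -p <profile> start --wait true
    out/minikube-linux-amd64 -p <profile> node list    # should match the first listing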

TestMultiControlPlane/serial/DeleteSecondaryNode (10.54s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-675014 node delete m03 --alsologtostderr -v 5
E1227 09:17:23.536786  377171 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/addons-102660/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-675014 node delete m03 --alsologtostderr -v 5: (9.689054909s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-675014 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (10.54s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.67s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.67s)

TestMultiControlPlane/serial/StopCluster (41.71s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-675014 stop --alsologtostderr -v 5
E1227 09:17:40.591529  377171 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/functional-583037/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:17:51.221947  377171 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/addons-102660/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-675014 stop --alsologtostderr -v 5: (41.594135932s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-675014 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-675014 status --alsologtostderr -v 5: exit status 7 (113.103872ms)

-- stdout --
	ha-675014
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-675014-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-675014-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1227 09:18:07.828939  451098 out.go:360] Setting OutFile to fd 1 ...
	I1227 09:18:07.829207  451098 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:18:07.829217  451098 out.go:374] Setting ErrFile to fd 2...
	I1227 09:18:07.829222  451098 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:18:07.829403  451098 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22343-373581/.minikube/bin
	I1227 09:18:07.829586  451098 out.go:368] Setting JSON to false
	I1227 09:18:07.829610  451098 mustload.go:66] Loading cluster: ha-675014
	I1227 09:18:07.829718  451098 notify.go:221] Checking for updates...
	I1227 09:18:07.830026  451098 config.go:182] Loaded profile config "ha-675014": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:18:07.830052  451098 status.go:174] checking status of ha-675014 ...
	I1227 09:18:07.830580  451098 cli_runner.go:164] Run: docker container inspect ha-675014 --format={{.State.Status}}
	I1227 09:18:07.848685  451098 status.go:371] ha-675014 host status = "Stopped" (err=<nil>)
	I1227 09:18:07.848711  451098 status.go:384] host is not running, skipping remaining checks
	I1227 09:18:07.848717  451098 status.go:176] ha-675014 status: &{Name:ha-675014 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1227 09:18:07.848744  451098 status.go:174] checking status of ha-675014-m02 ...
	I1227 09:18:07.849008  451098 cli_runner.go:164] Run: docker container inspect ha-675014-m02 --format={{.State.Status}}
	I1227 09:18:07.866494  451098 status.go:371] ha-675014-m02 host status = "Stopped" (err=<nil>)
	I1227 09:18:07.866515  451098 status.go:384] host is not running, skipping remaining checks
	I1227 09:18:07.866523  451098 status.go:176] ha-675014-m02 status: &{Name:ha-675014-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1227 09:18:07.866543  451098 status.go:174] checking status of ha-675014-m04 ...
	I1227 09:18:07.866850  451098 cli_runner.go:164] Run: docker container inspect ha-675014-m04 --format={{.State.Status}}
	I1227 09:18:07.883206  451098 status.go:371] ha-675014-m04 host status = "Stopped" (err=<nil>)
	I1227 09:18:07.883224  451098 status.go:384] host is not running, skipping remaining checks
	I1227 09:18:07.883230  451098 status.go:176] ha-675014-m04 status: &{Name:ha-675014-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (41.71s)

TestMultiControlPlane/serial/RestartCluster (49.82s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-675014 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-675014 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (49.038366357s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-675014 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (49.82s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.68s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.68s)

TestMultiControlPlane/serial/AddSecondaryNode (72.1s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-675014 node add --control-plane --alsologtostderr -v 5
E1227 09:19:02.512578  377171 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/functional-583037/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-675014 node add --control-plane --alsologtostderr -v 5: (1m11.226211829s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-675014 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (72.10s)
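
Note: "node add" creates a worker by default (as in AddWorkerNode above); the --control-plane flag instead joins the new node as an additional control-plane member, which is why this step takes noticeably longer (about 71s here versus about 26s for the worker). Sketch:

    out/minikube-linux-amd64 -p <profile> node add --control-plane
    out/minikube-linux-amd64 -p <profile> status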

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.88s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.88s)

TestJSONOutput/start/Command (39.64s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-143482 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-143482 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (39.640429851s)
--- PASS: TestJSONOutput/start/Command (39.64s)
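
Note: with --output=json, start emits machine-readable JSON events on stdout instead of the human-oriented progress text. The Audit and parallel subtests that follow evidently re-examine the stream captured by this run, which is why they complete in 0.00s. The invocation, with the profile name as a placeholder:

    out/minikube-linux-amd64 start -p <profile> --output=json --user=testUser --memory=3072 --wait=true --driver=docker --container-runtime=crio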

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (7.94s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-143482 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-143482 --output=json --user=testUser: (7.935497014s)
--- PASS: TestJSONOutput/stop/Command (7.94s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.22s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-486142 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-486142 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (72.732848ms)

-- stdout --
	{"specversion":"1.0","id":"8641686d-cab4-4d1c-b0b4-2740061c125c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-486142] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"8114d3b6-2ba0-462f-b012-ab5d1b4f6c10","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22343"}}
	{"specversion":"1.0","id":"a98c6bf1-0607-4e75-8dfc-82fced38f41c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"8464150f-9b23-40d1-8281-53b1cf354c77","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22343-373581/kubeconfig"}}
	{"specversion":"1.0","id":"a3e33d46-f468-41ff-81e4-7c51b66d497a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22343-373581/.minikube"}}
	{"specversion":"1.0","id":"ef055f0b-a9ae-4092-9666-e7ac344ac2b6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"e4acceac-f717-40fa-a72b-82783fc35015","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"af1fc289-527c-463a-8854-aed4b0407967","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:176: Cleaning up "json-output-error-486142" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-486142
--- PASS: TestErrorJSONOutput (0.22s)
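Each line in the JSON stream above is a CloudEvents envelope whose data payload is a map of string fields. As a minimal sketch of how a consumer might decode one such event in Go (the cloudEvent struct below is illustrative, not a type from minikube's source):

package main

import (
	"encoding/json"
	"fmt"
)

// cloudEvent mirrors only the fields visible in the log above; it is an
// illustrative decoder type, not minikube's own.
type cloudEvent struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	// The error event copied verbatim from the -- stdout -- block above.
	line := `{"specversion":"1.0","id":"af1fc289-527c-463a-8854-aed4b0407967","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}`
	var ev cloudEvent
	if err := json.Unmarshal([]byte(line), &ev); err != nil {
		panic(err)
	}
	// io.k8s.sigs.minikube.error events carry the exit code and message
	// as strings inside data, matching the exit status 56 seen above.
	fmt.Println(ev.Type, ev.Data["exitcode"], ev.Data["message"])
}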

TestKicCustomNetwork/create_custom_network (28.33s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-308417 --network=
E1227 09:21:18.671522  377171 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/functional-583037/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-308417 --network=: (26.232645984s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-308417" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-308417
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-308417: (2.078674352s)
--- PASS: TestKicCustomNetwork/create_custom_network (28.33s)

TestKicCustomNetwork/use_default_bridge_network (18.33s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-018369 --network=bridge
E1227 09:21:46.353957  377171 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/functional-583037/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-018369 --network=bridge: (16.381666696s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-018369" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-018369
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-018369: (1.930825297s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (18.33s)

TestKicExistingNetwork (19.56s)

=== RUN   TestKicExistingNetwork
I1227 09:22:02.441576  377171 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1227 09:22:02.458144  377171 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1227 09:22:02.458229  377171 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1227 09:22:02.458268  377171 cli_runner.go:164] Run: docker network inspect existing-network
W1227 09:22:02.474150  377171 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1227 09:22:02.474187  377171 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

stderr:
Error response from daemon: network existing-network not found
I1227 09:22:02.474224  377171 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

** /stderr **
I1227 09:22:02.474407  377171 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1227 09:22:02.491125  377171 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-8c57ecff6d5e IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:1e:54:d8:ad:bd:73} reservation:<nil>}
I1227 09:22:02.491560  377171 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001a7d6a0}
I1227 09:22:02.491595  377171 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1227 09:22:02.491647  377171 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1227 09:22:02.536603  377171 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-628818 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-628818 --network=existing-network: (17.475397043s)
helpers_test.go:176: Cleaning up "existing-network-628818" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-628818
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-628818: (1.960368986s)
I1227 09:22:21.989543  377171 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (19.56s)
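The network.go lines above show the subnet scan at work: 192.168.49.0/24 is skipped because an existing bridge owns it, and 192.168.58.0/24 is picked as the first free candidate. A minimal sketch of that scan idea, assuming a hypothetical isTaken check and a step of 9 between candidates (both inferred from this log, not taken from minikube's network package):

package main

import (
	"fmt"
	"net"
)

// isTaken stands in for minikube's real check, which inspects host
// interfaces and existing Docker networks; here it only knows about
// the one bridge seen in the log above.
func isTaken(candidate *net.IPNet) bool {
	_, taken, _ := net.ParseCIDR("192.168.49.0/24")
	return taken.String() == candidate.String()
}

func main() {
	// Walk candidate private /24s; the jump from 192.168.49.0/24 to
	// 192.168.58.0/24 in the log suggests a step of 9 per attempt.
	for third := 49; third < 256; third += 9 {
		cidr := fmt.Sprintf("192.168.%d.0/24", third)
		_, subnet, err := net.ParseCIDR(cidr)
		if err != nil {
			continue
		}
		if isTaken(subnet) {
			fmt.Println("skipping subnet", cidr, "that is taken")
			continue
		}
		fmt.Println("using free private subnet", cidr)
		return
	}
}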

TestKicCustomSubnet (22.72s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-343641 --subnet=192.168.60.0/24
E1227 09:22:23.536909  377171 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/addons-102660/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-343641 --subnet=192.168.60.0/24: (20.630434648s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-343641 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:176: Cleaning up "custom-subnet-343641" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-343641
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-343641: (2.074743539s)
--- PASS: TestKicCustomSubnet (22.72s)

TestKicStaticIP (23.23s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-827854 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-827854 --static-ip=192.168.200.200: (20.96766926s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-827854 ip
helpers_test.go:176: Cleaning up "static-ip-827854" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-827854
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-827854: (2.110820065s)
--- PASS: TestKicStaticIP (23.23s)

TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (41.02s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-311686 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-311686 --driver=docker  --container-runtime=crio: (15.659897636s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-314555 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-314555 --driver=docker  --container-runtime=crio: (19.533542811s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-311686
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-314555
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:176: Cleaning up "second-314555" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p second-314555
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p second-314555: (2.299798341s)
helpers_test.go:176: Cleaning up "first-311686" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p first-311686
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p first-311686: (2.306831425s)
--- PASS: TestMinikubeProfile (41.02s)
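The profile list -ojson calls above emit a JSON document that the test inspects for both profiles. A minimal consumer sketch, assuming a top-level shape with valid and invalid profile arrays and a Name field per entry (the structs are trimmed and illustrative, not minikube's full schema):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// profile is a trimmed, illustrative view of one entry; the real
// output also embeds the full cluster config.
type profile struct {
	Name string `json:"Name"`
}

// profileList assumes the {"invalid":[...],"valid":[...]} top level;
// treat it as a sketch of the shape, not a definitive schema.
type profileList struct {
	Valid   []profile `json:"valid"`
	Invalid []profile `json:"invalid"`
}

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "profile", "list", "-ojson").Output()
	if err != nil {
		panic(err)
	}
	var pl profileList
	if err := json.Unmarshal(out, &pl); err != nil {
		panic(err)
	}
	for _, p := range pl.Valid {
		fmt.Println("valid profile:", p.Name)
	}
}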

TestMountStart/serial/StartWithMountFirst (7.45s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-103748 --memory=3072 --mount-string /tmp/TestMountStartserial4251235301/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-103748 --memory=3072 --mount-string /tmp/TestMountStartserial4251235301/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (6.451675221s)
--- PASS: TestMountStart/serial/StartWithMountFirst (7.45s)

TestMountStart/serial/VerifyMountFirst (0.26s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-103748 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.26s)

TestMountStart/serial/StartWithMountSecond (4.7s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-122386 --memory=3072 --mount-string /tmp/TestMountStartserial4251235301/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-122386 --memory=3072 --mount-string /tmp/TestMountStartserial4251235301/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (3.698326897s)
--- PASS: TestMountStart/serial/StartWithMountSecond (4.70s)

TestMountStart/serial/VerifyMountSecond (0.26s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-122386 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.26s)

TestMountStart/serial/DeleteFirst (1.67s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-103748 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-103748 --alsologtostderr -v=5: (1.666755472s)
--- PASS: TestMountStart/serial/DeleteFirst (1.67s)

TestMountStart/serial/VerifyMountPostDelete (0.26s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-122386 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.26s)

TestMountStart/serial/Stop (1.25s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-122386
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-122386: (1.252553698s)
--- PASS: TestMountStart/serial/Stop (1.25s)

TestMountStart/serial/RestartStopped (7.74s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-122386
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-122386: (6.742165036s)
--- PASS: TestMountStart/serial/RestartStopped (7.74s)

TestMountStart/serial/VerifyMountPostStop (0.26s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-122386 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.26s)

TestMultiNode/serial/FreshStart2Nodes (61.83s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-748578 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-748578 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (1m1.363546347s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-748578 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (61.83s)

TestMultiNode/serial/DeployApp2Nodes (4.21s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-748578 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-748578 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-748578 -- rollout status deployment/busybox: (2.827175687s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-748578 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-748578 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-748578 -- exec busybox-769dd8b7dd-568fl -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-748578 -- exec busybox-769dd8b7dd-t7vv5 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-748578 -- exec busybox-769dd8b7dd-568fl -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-748578 -- exec busybox-769dd8b7dd-t7vv5 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-748578 -- exec busybox-769dd8b7dd-568fl -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-748578 -- exec busybox-769dd8b7dd-t7vv5 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.21s)

TestMultiNode/serial/PingHostFrom2Pods (0.69s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-748578 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-748578 -- exec busybox-769dd8b7dd-568fl -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-748578 -- exec busybox-769dd8b7dd-568fl -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-748578 -- exec busybox-769dd8b7dd-t7vv5 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-748578 -- exec busybox-769dd8b7dd-t7vv5 -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.69s)

TestMultiNode/serial/AddNode (27.69s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-748578 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-748578 -v=5 --alsologtostderr: (27.061121078s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-748578 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (27.69s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-748578 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.63s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.63s)

TestMultiNode/serial/CopyFile (9.27s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-748578 status --output json --alsologtostderr
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-748578 cp testdata/cp-test.txt multinode-748578:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-748578 ssh -n multinode-748578 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-748578 cp multinode-748578:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4085742215/001/cp-test_multinode-748578.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-748578 ssh -n multinode-748578 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-748578 cp multinode-748578:/home/docker/cp-test.txt multinode-748578-m02:/home/docker/cp-test_multinode-748578_multinode-748578-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-748578 ssh -n multinode-748578 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-748578 ssh -n multinode-748578-m02 "sudo cat /home/docker/cp-test_multinode-748578_multinode-748578-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-748578 cp multinode-748578:/home/docker/cp-test.txt multinode-748578-m03:/home/docker/cp-test_multinode-748578_multinode-748578-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-748578 ssh -n multinode-748578 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-748578 ssh -n multinode-748578-m03 "sudo cat /home/docker/cp-test_multinode-748578_multinode-748578-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-748578 cp testdata/cp-test.txt multinode-748578-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-748578 ssh -n multinode-748578-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-748578 cp multinode-748578-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4085742215/001/cp-test_multinode-748578-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-748578 ssh -n multinode-748578-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-748578 cp multinode-748578-m02:/home/docker/cp-test.txt multinode-748578:/home/docker/cp-test_multinode-748578-m02_multinode-748578.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-748578 ssh -n multinode-748578-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-748578 ssh -n multinode-748578 "sudo cat /home/docker/cp-test_multinode-748578-m02_multinode-748578.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-748578 cp multinode-748578-m02:/home/docker/cp-test.txt multinode-748578-m03:/home/docker/cp-test_multinode-748578-m02_multinode-748578-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-748578 ssh -n multinode-748578-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-748578 ssh -n multinode-748578-m03 "sudo cat /home/docker/cp-test_multinode-748578-m02_multinode-748578-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-748578 cp testdata/cp-test.txt multinode-748578-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-748578 ssh -n multinode-748578-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-748578 cp multinode-748578-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4085742215/001/cp-test_multinode-748578-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-748578 ssh -n multinode-748578-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-748578 cp multinode-748578-m03:/home/docker/cp-test.txt multinode-748578:/home/docker/cp-test_multinode-748578-m03_multinode-748578.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-748578 ssh -n multinode-748578-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-748578 ssh -n multinode-748578 "sudo cat /home/docker/cp-test_multinode-748578-m03_multinode-748578.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-748578 cp multinode-748578-m03:/home/docker/cp-test.txt multinode-748578-m02:/home/docker/cp-test_multinode-748578-m03_multinode-748578-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-748578 ssh -n multinode-748578-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-748578 ssh -n multinode-748578-m02 "sudo cat /home/docker/cp-test_multinode-748578-m03_multinode-748578-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.27s)

TestMultiNode/serial/StopNode (2.24s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-748578 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-748578 node stop m03: (1.270925968s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-748578 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-748578 status: exit status 7 (484.630821ms)

-- stdout --
	multinode-748578
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-748578-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-748578-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-748578 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-748578 status --alsologtostderr: exit status 7 (486.479114ms)

-- stdout --
	multinode-748578
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-748578-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-748578-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1227 09:26:00.883013  511180 out.go:360] Setting OutFile to fd 1 ...
	I1227 09:26:00.883265  511180 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:26:00.883274  511180 out.go:374] Setting ErrFile to fd 2...
	I1227 09:26:00.883278  511180 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:26:00.883507  511180 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22343-373581/.minikube/bin
	I1227 09:26:00.883705  511180 out.go:368] Setting JSON to false
	I1227 09:26:00.883735  511180 mustload.go:66] Loading cluster: multinode-748578
	I1227 09:26:00.883831  511180 notify.go:221] Checking for updates...
	I1227 09:26:00.884319  511180 config.go:182] Loaded profile config "multinode-748578": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:26:00.884358  511180 status.go:174] checking status of multinode-748578 ...
	I1227 09:26:00.884969  511180 cli_runner.go:164] Run: docker container inspect multinode-748578 --format={{.State.Status}}
	I1227 09:26:00.903992  511180 status.go:371] multinode-748578 host status = "Running" (err=<nil>)
	I1227 09:26:00.904022  511180 host.go:66] Checking if "multinode-748578" exists ...
	I1227 09:26:00.904266  511180 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-748578
	I1227 09:26:00.921279  511180 host.go:66] Checking if "multinode-748578" exists ...
	I1227 09:26:00.921569  511180 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 09:26:00.921633  511180 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-748578
	I1227 09:26:00.939060  511180 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33278 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/multinode-748578/id_rsa Username:docker}
	I1227 09:26:01.026001  511180 ssh_runner.go:195] Run: systemctl --version
	I1227 09:26:01.032163  511180 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 09:26:01.043847  511180 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 09:26:01.096838  511180 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:false NGoroutines:64 SystemTime:2025-12-27 09:26:01.086717666 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1227 09:26:01.097362  511180 kubeconfig.go:125] found "multinode-748578" server: "https://192.168.67.2:8443"
	I1227 09:26:01.097397  511180 api_server.go:166] Checking apiserver status ...
	I1227 09:26:01.097440  511180 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 09:26:01.108905  511180 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1229/cgroup
	I1227 09:26:01.116984  511180 ssh_runner.go:195] Run: sudo grep ^0:: /proc/1229/cgroup
	I1227 09:26:01.124359  511180 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/system.slice/crio-f0e327f94b5100aef56c13b0452591b93d200b76d822119e509b2db866f398ec.scope/container/cgroup.freeze
	I1227 09:26:01.131430  511180 api_server.go:299] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1227 09:26:01.135524  511180 api_server.go:325] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1227 09:26:01.135547  511180 status.go:463] multinode-748578 apiserver status = Running (err=<nil>)
	I1227 09:26:01.135557  511180 status.go:176] multinode-748578 status: &{Name:multinode-748578 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1227 09:26:01.135571  511180 status.go:174] checking status of multinode-748578-m02 ...
	I1227 09:26:01.135840  511180 cli_runner.go:164] Run: docker container inspect multinode-748578-m02 --format={{.State.Status}}
	I1227 09:26:01.153443  511180 status.go:371] multinode-748578-m02 host status = "Running" (err=<nil>)
	I1227 09:26:01.153467  511180 host.go:66] Checking if "multinode-748578-m02" exists ...
	I1227 09:26:01.153730  511180 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-748578-m02
	I1227 09:26:01.172180  511180 host.go:66] Checking if "multinode-748578-m02" exists ...
	I1227 09:26:01.172520  511180 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 09:26:01.172570  511180 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-748578-m02
	I1227 09:26:01.189767  511180 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33283 SSHKeyPath:/home/jenkins/minikube-integration/22343-373581/.minikube/machines/multinode-748578-m02/id_rsa Username:docker}
	I1227 09:26:01.277969  511180 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 09:26:01.291139  511180 status.go:176] multinode-748578-m02 status: &{Name:multinode-748578-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1227 09:26:01.291173  511180 status.go:174] checking status of multinode-748578-m03 ...
	I1227 09:26:01.291485  511180 cli_runner.go:164] Run: docker container inspect multinode-748578-m03 --format={{.State.Status}}
	I1227 09:26:01.309910  511180 status.go:371] multinode-748578-m03 host status = "Stopped" (err=<nil>)
	I1227 09:26:01.309934  511180 status.go:384] host is not running, skipping remaining checks
	I1227 09:26:01.309942  511180 status.go:176] multinode-748578-m03 status: &{Name:multinode-748578-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.24s)
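Note from the runs above that minikube status exits 7 rather than 0 once any node is stopped, while still printing the per-node report on stdout, so a caller has to read the output even on a non-zero exit. A minimal Go sketch of tolerating that behaviour (the exit-code handling is the point; the profile name is just the one from this test):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "multinode-748578", "status")
	out, err := cmd.Output()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// Non-zero exit (7 in the run above) is expected when a node
		// is stopped; anything else would be a real failure.
		fmt.Println("status exited with", exitErr.ExitCode())
	} else if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
}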

TestMultiNode/serial/StartAfterStop (6.95s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-748578 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-748578 node start m03 -v=5 --alsologtostderr: (6.278678944s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-748578 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (6.95s)

TestMultiNode/serial/RestartKeepsNodes (59.56s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-748578
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-748578
E1227 09:26:18.670282  377171 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/functional-583037/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-748578: (31.301352542s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-748578 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-748578 --wait=true -v=5 --alsologtostderr: (28.137421753s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-748578
--- PASS: TestMultiNode/serial/RestartKeepsNodes (59.56s)

TestMultiNode/serial/DeleteNode (4.96s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-748578 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-748578 node delete m03: (4.379905167s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-748578 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (4.96s)

TestMultiNode/serial/StopMultiNode (28.47s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-748578 stop
E1227 09:27:23.540445  377171 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/addons-102660/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-748578 stop: (28.283574046s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-748578 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-748578 status: exit status 7 (94.871404ms)

-- stdout --
	multinode-748578
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-748578-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-748578 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-748578 status --alsologtostderr: exit status 7 (95.781539ms)

-- stdout --
	multinode-748578
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-748578-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1227 09:27:41.218278  520669 out.go:360] Setting OutFile to fd 1 ...
	I1227 09:27:41.218520  520669 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:27:41.218529  520669 out.go:374] Setting ErrFile to fd 2...
	I1227 09:27:41.218533  520669 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:27:41.218701  520669 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22343-373581/.minikube/bin
	I1227 09:27:41.218875  520669 out.go:368] Setting JSON to false
	I1227 09:27:41.218902  520669 mustload.go:66] Loading cluster: multinode-748578
	I1227 09:27:41.218967  520669 notify.go:221] Checking for updates...
	I1227 09:27:41.219377  520669 config.go:182] Loaded profile config "multinode-748578": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:27:41.219407  520669 status.go:174] checking status of multinode-748578 ...
	I1227 09:27:41.220026  520669 cli_runner.go:164] Run: docker container inspect multinode-748578 --format={{.State.Status}}
	I1227 09:27:41.238941  520669 status.go:371] multinode-748578 host status = "Stopped" (err=<nil>)
	I1227 09:27:41.238984  520669 status.go:384] host is not running, skipping remaining checks
	I1227 09:27:41.239000  520669 status.go:176] multinode-748578 status: &{Name:multinode-748578 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1227 09:27:41.239034  520669 status.go:174] checking status of multinode-748578-m02 ...
	I1227 09:27:41.239297  520669 cli_runner.go:164] Run: docker container inspect multinode-748578-m02 --format={{.State.Status}}
	I1227 09:27:41.257067  520669 status.go:371] multinode-748578-m02 host status = "Stopped" (err=<nil>)
	I1227 09:27:41.257088  520669 status.go:384] host is not running, skipping remaining checks
	I1227 09:27:41.257093  520669 status.go:176] multinode-748578-m02 status: &{Name:multinode-748578-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (28.47s)

TestMultiNode/serial/RestartMultiNode (45.08s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-748578 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-748578 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (44.509270303s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-748578 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (45.08s)

TestMultiNode/serial/ValidateNameConflict (19.02s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-748578
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-748578-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-748578-m02 --driver=docker  --container-runtime=crio: exit status 14 (75.991914ms)

-- stdout --
	* [multinode-748578-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22343
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22343-373581/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22343-373581/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-748578-m02' is duplicated with machine name 'multinode-748578-m02' in profile 'multinode-748578'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-748578-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-748578-m03 --driver=docker  --container-runtime=crio: (16.293471596s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-748578
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-748578: exit status 80 (281.314466ms)

-- stdout --
	* Adding node m03 to cluster multinode-748578 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-748578-m03 already exists in multinode-748578-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-748578-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-748578-m03: (2.312275565s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (19.02s)

TestScheduledStopUnix (94.61s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-342214 --memory=3072 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-342214 --memory=3072 --driver=docker  --container-runtime=crio: (18.634953629s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-342214 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

** stderr ** 
	I1227 09:29:08.163445  530451 out.go:360] Setting OutFile to fd 1 ...
	I1227 09:29:08.163546  530451 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:29:08.163558  530451 out.go:374] Setting ErrFile to fd 2...
	I1227 09:29:08.163564  530451 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:29:08.163769  530451 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22343-373581/.minikube/bin
	I1227 09:29:08.164059  530451 out.go:368] Setting JSON to false
	I1227 09:29:08.164148  530451 mustload.go:66] Loading cluster: scheduled-stop-342214
	I1227 09:29:08.164427  530451 config.go:182] Loaded profile config "scheduled-stop-342214": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:29:08.164501  530451 profile.go:143] Saving config to /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/scheduled-stop-342214/config.json ...
	I1227 09:29:08.164668  530451 mustload.go:66] Loading cluster: scheduled-stop-342214
	I1227 09:29:08.164805  530451 config.go:182] Loaded profile config "scheduled-stop-342214": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0

** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-342214 -n scheduled-stop-342214
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-342214 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

** stderr ** 
	I1227 09:29:08.555029  530627 out.go:360] Setting OutFile to fd 1 ...
	I1227 09:29:08.555122  530627 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:29:08.555130  530627 out.go:374] Setting ErrFile to fd 2...
	I1227 09:29:08.555134  530627 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:29:08.555294  530627 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22343-373581/.minikube/bin
	I1227 09:29:08.555527  530627 out.go:368] Setting JSON to false
	I1227 09:29:08.555713  530627 daemonize_unix.go:73] killing process 530487 as it is an old scheduled stop
	I1227 09:29:08.555840  530627 mustload.go:66] Loading cluster: scheduled-stop-342214
	I1227 09:29:08.556148  530627 config.go:182] Loaded profile config "scheduled-stop-342214": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:29:08.556212  530627 profile.go:143] Saving config to /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/scheduled-stop-342214/config.json ...
	I1227 09:29:08.556381  530627 mustload.go:66] Loading cluster: scheduled-stop-342214
	I1227 09:29:08.556474  530627 config.go:182] Loaded profile config "scheduled-stop-342214": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0

** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1227 09:29:08.561437  377171 retry.go:84] will retry after 0s: open /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/scheduled-stop-342214/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-342214 --cancel-scheduled
minikube stop output:

-- stdout --
	* All existing scheduled stops cancelled

-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-342214 -n scheduled-stop-342214
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-342214
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-342214 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

** stderr ** 
	I1227 09:29:34.412118  531327 out.go:360] Setting OutFile to fd 1 ...
	I1227 09:29:34.412231  531327 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:29:34.412240  531327 out.go:374] Setting ErrFile to fd 2...
	I1227 09:29:34.412243  531327 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:29:34.412488  531327 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22343-373581/.minikube/bin
	I1227 09:29:34.412758  531327 out.go:368] Setting JSON to false
	I1227 09:29:34.412850  531327 mustload.go:66] Loading cluster: scheduled-stop-342214
	I1227 09:29:34.413157  531327 config.go:182] Loaded profile config "scheduled-stop-342214": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:29:34.413231  531327 profile.go:143] Saving config to /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/scheduled-stop-342214/config.json ...
	I1227 09:29:34.413430  531327 mustload.go:66] Loading cluster: scheduled-stop-342214
	I1227 09:29:34.413527  531327 config.go:182] Loaded profile config "scheduled-stop-342214": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0

** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-342214
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-342214: exit status 7 (80.320921ms)

-- stdout --
	scheduled-stop-342214
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-342214 -n scheduled-stop-342214
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-342214 -n scheduled-stop-342214: exit status 7 (77.266093ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:176: Cleaning up "scheduled-stop-342214" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-342214
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-342214: (4.504188002s)
--- PASS: TestScheduledStopUnix (94.61s)
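
The test drives the whole schedule, reschedule, cancel cycle through the CLI alone. A sketch of that flow using os/exec, assuming a minikube binary on PATH and the profile name from the log (illustrative, not the test's own harness):

package main

import (
	"fmt"
	"os/exec"
)

// run shells out to minikube and returns its combined output.
func run(args ...string) (string, error) {
	out, err := exec.Command("minikube", args...).CombinedOutput()
	return string(out), err
}

func main() {
	profile := "scheduled-stop-342214" // profile name taken from the log
	// Schedule a stop 5 minutes out; the CLI forks a background process.
	if out, err := run("stop", "-p", profile, "--schedule", "5m"); err != nil {
		fmt.Println(out, err)
	}
	// Cancel it again; re-scheduling likewise kills the older stop process.
	if out, err := run("stop", "-p", profile, "--cancel-scheduled"); err != nil {
		fmt.Println(out, err)
	}
	// Once the host actually stops, status exits 7, so err is expected.
	out, err := run("status", "-p", profile, "--format", "{{.Host}}")
	fmt.Println(out, err)
}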

TestInsufficientStorage (8.56s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-457952 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-457952 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (6.15515781s)

-- stdout --
	{"specversion":"1.0","id":"0280f2a0-b6e5-45a1-b47d-2be0e4f06bdf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-457952] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"accfe713-9a67-489e-9e84-e3b6bfb897a5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22343"}}
	{"specversion":"1.0","id":"888ac529-5502-42bf-ad58-3302839a82d2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"2bdf20d4-0731-4b3f-9f87-ae75be324af2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22343-373581/kubeconfig"}}
	{"specversion":"1.0","id":"ab3174f9-6500-4366-ac2a-e95d9285c645","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22343-373581/.minikube"}}
	{"specversion":"1.0","id":"89087a92-0cb4-4e04-9dfe-1403d061b924","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"ed876ade-279a-468d-9969-45b0fa4c39d1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"8e39095a-3a1e-42e2-90fb-6dac5f567bbf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"76cd329e-eb11-41cd-915e-9fa42675584c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"e6a0351e-913d-431d-9413-1c88bf0a6c4a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"770ae180-9514-4601-abe6-d0fd34954f10","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"eb028196-15db-4506-ad55-d7521e021954","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-457952\" primary control-plane node in \"insufficient-storage-457952\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"e50b6458-d71a-4647-93e4-5e2b404c2141","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1766570851-22316 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"e67dbacd-a7a9-4d74-8731-95a12b1e50e1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"c4e67995-739a-4cda-b5d8-4237c412672b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-457952 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-457952 --output=json --layout=cluster: exit status 7 (274.240083ms)

-- stdout --
	{"Name":"insufficient-storage-457952","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-457952","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E1227 09:30:30.497219  533849 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-457952" does not appear in /home/jenkins/minikube-integration/22343-373581/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-457952 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-457952 --output=json --layout=cluster: exit status 7 (273.450153ms)

-- stdout --
	{"Name":"insufficient-storage-457952","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-457952","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E1227 09:30:30.771941  533962 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-457952" does not appear in /home/jenkins/minikube-integration/22343-373581/kubeconfig
	E1227 09:30:30.782116  533962 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/insufficient-storage-457952/events.json: no such file or directory

** /stderr **
helpers_test.go:176: Cleaning up "insufficient-storage-457952" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-457952
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-457952: (1.854438481s)
--- PASS: TestInsufficientStorage (8.56s)
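
With --output=json, each progress line is a CloudEvents-style JSON object like the ones above, and the storage failure arrives as an io.k8s.sigs.minikube.error event with exitcode "26". A sketch of a consumer for that stream (the struct is ours; the sample lines are trimmed from the logged events):

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"strings"
)

// event captures the two fields a consumer needs; the logged "data"
// values are all strings, including the numeric-looking ones.
type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	// Two sample lines in the shape the log shows; a real consumer
	// would scan the command's stdout instead.
	stream := `{"type":"io.k8s.sigs.minikube.step","data":{"currentstep":"8","name":"Creating Container"}}
{"type":"io.k8s.sigs.minikube.error","data":{"exitcode":"26","name":"RSRC_DOCKER_STORAGE","message":"Docker is out of disk space!"}}`

	sc := bufio.NewScanner(strings.NewReader(stream))
	for sc.Scan() {
		var ev event
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // skip non-JSON noise
		}
		if strings.HasSuffix(ev.Type, ".error") {
			fmt.Printf("failed: %s (exit %s): %s\n",
				ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
		}
	}
}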

TestRunningBinaryUpgrade (49s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.35.0.3325135122 start -p running-upgrade-561421 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.35.0.3325135122 start -p running-upgrade-561421 --memory=3072 --vm-driver=docker  --container-runtime=crio: (20.866934056s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-561421 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-561421 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (22.276807109s)
helpers_test.go:176: Cleaning up "running-upgrade-561421" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-561421
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-561421: (2.598810246s)
--- PASS: TestRunningBinaryUpgrade (49.00s)

TestKubernetesUpgrade (83.18s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-761172 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1227 09:32:23.537192  377171 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/addons-102660/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-761172 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (23.186295904s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-761172 --alsologtostderr
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-761172 --alsologtostderr: (1.886338872s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-761172 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-761172 status --format={{.Host}}: exit status 7 (77.055799ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-761172 --memory=3072 --kubernetes-version=v1.35.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-761172 --memory=3072 --kubernetes-version=v1.35.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (50.697649608s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-761172 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-761172 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-761172 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 106 (89.913784ms)

-- stdout --
	* [kubernetes-upgrade-761172] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22343
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22343-373581/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22343-373581/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.35.0 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-761172
	    minikube start -p kubernetes-upgrade-761172 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-7611722 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.35.0, by running:
	    
	    minikube start -p kubernetes-upgrade-761172 --kubernetes-version=v1.35.0
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-761172 --memory=3072 --kubernetes-version=v1.35.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-761172 --memory=3072 --kubernetes-version=v1.35.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4.57979006s)
helpers_test.go:176: Cleaning up "kubernetes-upgrade-761172" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-761172
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-761172: (2.587589638s)
--- PASS: TestKubernetesUpgrade (83.18s)
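
The downgrade attempt fails fast with K8S_DOWNGRADE_UNSUPPORTED, before any cluster work happens. A sketch of that version guard using golang.org/x/mod/semver; minikube's real check lives in its start validation, so treat this as illustrative:

package main

import (
	"fmt"

	"golang.org/x/mod/semver"
)

// checkVersion refuses to move an existing cluster backwards.
func checkVersion(current, requested string) error {
	if semver.Compare(requested, current) < 0 {
		return fmt.Errorf("unable to safely downgrade existing Kubernetes %s cluster to %s",
			current, requested)
	}
	return nil
}

func main() {
	fmt.Println(checkVersion("v1.35.0", "v1.28.0")) // downgrade: error, like exit status 106
	fmt.Println(checkVersion("v1.28.0", "v1.35.0")) // upgrade: nil
}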

TestMissingContainerUpgrade (84.17s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.35.0.2373477459 start -p missing-upgrade-949641 --memory=3072 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.35.0.2373477459 start -p missing-upgrade-949641 --memory=3072 --driver=docker  --container-runtime=crio: (22.135518977s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-949641
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-949641: (10.470810922s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-949641
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-949641 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1227 09:32:41.714480  377171 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/functional-583037/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-949641 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (45.898885236s)
helpers_test.go:176: Cleaning up "missing-upgrade-949641" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-949641
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-949641: (2.380949656s)
--- PASS: TestMissingContainerUpgrade (84.17s)

TestStoppedBinaryUpgrade/Setup (3.24s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (3.24s)

TestPause/serial/Start (50.24s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-174795 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-174795 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (50.235317809s)
--- PASS: TestPause/serial/Start (50.24s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-397662 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-397662 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (86.314644ms)

-- stdout --
	* [NoKubernetes-397662] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22343
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22343-373581/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22343-373581/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
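
Exit status 14 here is pure flag validation: --kubernetes-version contradicts --no-kubernetes. A standalone sketch of the same rule (the flag wiring is ours, not minikube's cobra setup):

package main

import (
	"errors"
	"flag"
	"fmt"
)

// validate rejects the contradictory flag combination up front.
func validate(noKubernetes bool, kubernetesVersion string) error {
	if noKubernetes && kubernetesVersion != "" {
		return errors.New("cannot specify --kubernetes-version with --no-kubernetes")
	}
	return nil
}

func main() {
	noK8s := flag.Bool("no-kubernetes", false, "start without Kubernetes")
	version := flag.String("kubernetes-version", "", "Kubernetes version to run")
	flag.Parse()
	if err := validate(*noK8s, *version); err != nil {
		fmt.Println("X Exiting due to MK_USAGE:", err)
	}
}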

TestNoKubernetes/serial/StartWithK8s (31.41s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-397662 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-397662 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (31.044948281s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-397662 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (31.41s)

TestStoppedBinaryUpgrade/Upgrade (306.88s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.35.0.3778013566 start -p stopped-upgrade-196124 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.35.0.3778013566 start -p stopped-upgrade-196124 --memory=3072 --vm-driver=docker  --container-runtime=crio: (45.301156652s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.35.0.3778013566 -p stopped-upgrade-196124 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.35.0.3778013566 -p stopped-upgrade-196124 stop: (1.870789626s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-196124 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-196124 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m19.708961781s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (306.88s)

TestNoKubernetes/serial/StartWithStopK8s (17.74s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-397662 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E1227 09:31:18.670987  377171 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/functional-583037/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-397662 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (15.404191292s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-397662 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-397662 status -o json: exit status 2 (296.904456ms)

-- stdout --
	{"Name":"NoKubernetes-397662","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-397662
no_kubernetes_test.go:149: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-397662: (2.034677036s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (17.74s)
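
The status payload above is plain JSON, so the "host running, kubelet stopped" state the test expects is easy to assert on. A sketch that parses the exact line from the log (the struct is ours, not minikube's exported type):

package main

import (
	"encoding/json"
	"fmt"
)

// profileStatus mirrors the field names seen in the logged output.
type profileStatus struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
	Worker     bool
}

func main() {
	raw := `{"Name":"NoKubernetes-397662","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}`
	var st profileStatus
	if err := json.Unmarshal([]byte(raw), &st); err != nil {
		panic(err)
	}
	// Host running with kubelet and apiserver stopped is exactly the
	// "--no-kubernetes over an existing cluster" state, hence exit 2.
	fmt.Printf("%s: host=%s kubelet=%s\n", st.Name, st.Host, st.Kubelet)
}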

TestNoKubernetes/serial/Start (4.73s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-397662 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-397662 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (4.728015684s)
--- PASS: TestNoKubernetes/serial/Start (4.73s)

TestPause/serial/SecondStartNoReconfiguration (6.54s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-174795 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-174795 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (6.526095187s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (6.54s)

TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/22343-373581/.minikube/cache/linux/amd64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.3s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-397662 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-397662 "sudo systemctl is-active --quiet service kubelet": exit status 1 (301.429939ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.30s)
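
"Process exited with status 3" is systemctl's convention: is-active exits 0 when the unit is active and 3 when it is inactive, and the ssh layer propagates that code. A local sketch of interpreting the exit status (assumes a systemd host; the test runs the command over minikube ssh instead):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// --quiet suppresses output; the exit code alone carries the answer.
	cmd := exec.Command("systemctl", "is-active", "--quiet", "kubelet")
	err := cmd.Run()
	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("kubelet is active")
	case errors.As(err, &exitErr):
		fmt.Println("kubelet not active, exit code:", exitErr.ExitCode())
	default:
		fmt.Println("could not run systemctl:", err)
	}
}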

TestNoKubernetes/serial/ProfileList (1.48s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.48s)

TestNetworkPlugins/group/false (3.99s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-157923 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-157923 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (166.39503ms)

-- stdout --
	* [false-157923] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22343
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22343-373581/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22343-373581/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I1227 09:31:28.041064  551580 out.go:360] Setting OutFile to fd 1 ...
	I1227 09:31:28.041298  551580 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:31:28.041305  551580 out.go:374] Setting ErrFile to fd 2...
	I1227 09:31:28.041310  551580 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:31:28.041522  551580 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22343-373581/.minikube/bin
	I1227 09:31:28.042003  551580 out.go:368] Setting JSON to false
	I1227 09:31:28.043072  551580 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4432,"bootTime":1766823456,"procs":282,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1227 09:31:28.043142  551580 start.go:143] virtualization: kvm guest
	I1227 09:31:28.044958  551580 out.go:179] * [false-157923] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1227 09:31:28.046265  551580 notify.go:221] Checking for updates...
	I1227 09:31:28.046281  551580 out.go:179]   - MINIKUBE_LOCATION=22343
	I1227 09:31:28.047775  551580 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 09:31:28.049112  551580 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22343-373581/kubeconfig
	I1227 09:31:28.050374  551580 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22343-373581/.minikube
	I1227 09:31:28.051464  551580 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1227 09:31:28.052528  551580 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1227 09:31:28.054359  551580 config.go:182] Loaded profile config "NoKubernetes-397662": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1227 09:31:28.054519  551580 config.go:182] Loaded profile config "pause-174795": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:31:28.054638  551580 config.go:182] Loaded profile config "stopped-upgrade-196124": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1227 09:31:28.054757  551580 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 09:31:28.081758  551580 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1227 09:31:28.081884  551580 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 09:31:28.136241  551580 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-27 09:31:28.125922893 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1227 09:31:28.136347  551580 docker.go:319] overlay module found
	I1227 09:31:28.137664  551580 out.go:179] * Using the docker driver based on user configuration
	I1227 09:31:28.138660  551580 start.go:309] selected driver: docker
	I1227 09:31:28.138677  551580 start.go:928] validating driver "docker" against <nil>
	I1227 09:31:28.138688  551580 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 09:31:28.140362  551580 out.go:203] 
	W1227 09:31:28.141408  551580 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1227 09:31:28.142451  551580 out.go:203] 

** /stderr **
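
The MK_USAGE failure above is intentional: disabling CNI is only coherent where the runtime brings its own networking, and crio does not. A sketch of that validation rule, paraphrased from the logged error rather than copied from minikube's source:

package main

import "fmt"

// validateCNI rejects --cni=false for runtimes that need a CNI plugin.
func validateCNI(runtime, cni string) error {
	if cni == "false" && runtime != "docker" {
		return fmt.Errorf("the %q container runtime requires CNI", runtime)
	}
	return nil
}

func main() {
	fmt.Println(validateCNI("crio", "false"))   // usage error, exit status 14
	fmt.Println(validateCNI("docker", "false")) // ok: docker handles networking itself
}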
net_test.go:88: 
----------------------- debugLogs start: false-157923 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-157923

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-157923

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-157923

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-157923

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-157923

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-157923

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-157923

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-157923

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-157923

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-157923

>>> host: /etc/nsswitch.conf:
* Profile "false-157923" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-157923"

>>> host: /etc/hosts:
* Profile "false-157923" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-157923"

>>> host: /etc/resolv.conf:
* Profile "false-157923" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-157923"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-157923

>>> host: crictl pods:
* Profile "false-157923" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-157923"

>>> host: crictl containers:
* Profile "false-157923" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-157923"

>>> k8s: describe netcat deployment:
error: context "false-157923" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-157923" does not exist

>>> k8s: netcat logs:
error: context "false-157923" does not exist

>>> k8s: describe coredns deployment:
error: context "false-157923" does not exist

>>> k8s: describe coredns pods:
error: context "false-157923" does not exist

>>> k8s: coredns logs:
error: context "false-157923" does not exist

>>> k8s: describe api server pod(s):
error: context "false-157923" does not exist

>>> k8s: api server logs:
error: context "false-157923" does not exist

>>> host: /etc/cni:
* Profile "false-157923" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-157923"

>>> host: ip a s:
* Profile "false-157923" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-157923"

>>> host: ip r s:
* Profile "false-157923" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-157923"

>>> host: iptables-save:
* Profile "false-157923" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-157923"

>>> host: iptables table nat:
* Profile "false-157923" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-157923"

>>> k8s: describe kube-proxy daemon set:
error: context "false-157923" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-157923" does not exist

>>> k8s: kube-proxy logs:
error: context "false-157923" does not exist

>>> host: kubelet daemon status:
* Profile "false-157923" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-157923"

>>> host: kubelet daemon config:
* Profile "false-157923" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-157923"

>>> k8s: kubelet logs:
* Profile "false-157923" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-157923"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-157923" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-157923"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-157923" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-157923"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22343-373581/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 27 Dec 2025 09:31:27 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: pause-174795
contexts:
- context:
    cluster: pause-174795
    extensions:
    - extension:
        last-update: Sat, 27 Dec 2025 09:31:27 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: pause-174795
  name: pause-174795
current-context: pause-174795
kind: Config
users:
- name: pause-174795
  user:
    client-certificate: /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/pause-174795/client.crt
    client-key: /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/pause-174795/client.key
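
To read a kubeconfig like the one above programmatically, client-go's clientcmd loader works directly; a small sketch (the path is this CI runner's, so substitute your own):

package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	path := "/home/jenkins/minikube-integration/22343-373581/kubeconfig" // from the log
	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		panic(err)
	}
	fmt.Println("current-context:", cfg.CurrentContext)
	for name, cluster := range cfg.Clusters {
		fmt.Println(name, "->", cluster.Server)
	}
}
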
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-157923

>>> host: docker daemon status:
* Profile "false-157923" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-157923"

>>> host: docker daemon config:
* Profile "false-157923" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-157923"

>>> host: /etc/docker/daemon.json:
* Profile "false-157923" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-157923"

>>> host: docker system info:
* Profile "false-157923" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-157923"

>>> host: cri-docker daemon status:
* Profile "false-157923" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-157923"

>>> host: cri-docker daemon config:
* Profile "false-157923" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-157923"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-157923" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-157923"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-157923" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-157923"

>>> host: cri-dockerd version:
* Profile "false-157923" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-157923"

>>> host: containerd daemon status:
* Profile "false-157923" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-157923"

>>> host: containerd daemon config:
* Profile "false-157923" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-157923"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-157923" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-157923"

                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-157923" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-157923"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-157923" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-157923"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-157923" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-157923"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-157923" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-157923"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-157923" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-157923"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-157923" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-157923"

                                                
                                                
----------------------- debugLogs end: false-157923 [took: 3.631648795s] --------------------------------
helpers_test.go:176: Cleaning up "false-157923" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p false-157923
--- PASS: TestNetworkPlugins/group/false (3.99s)

TestNoKubernetes/serial/Stop (1.29s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-397662
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-397662: (1.285955088s)
--- PASS: TestNoKubernetes/serial/Stop (1.29s)

TestNoKubernetes/serial/StartNoArgs (7.21s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-397662 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-397662 --driver=docker  --container-runtime=crio: (7.209839164s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.21s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.31s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-397662 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-397662 "sudo systemctl is-active --quiet service kubelet": exit status 1 (311.427287ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.31s)
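
For context on the exit status above: systemctl is-active exits 0 only when the unit is active, and conventionally exits 3 when it is inactive, so "Process exited with status 3" is exactly what this test expects on a node started without Kubernetes. The same check, minus --quiet so the state string prints, would be:

  minikube ssh -p NoKubernetes-397662 "sudo systemctl is-active kubelet"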

TestPreload/Start-NoPreload-PullImage (57.21s)

=== RUN   TestPreload/Start-NoPreload-PullImage
preload_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-805186 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio
preload_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-805186 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio: (48.055913077s)
preload_test.go:56: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-805186 image pull ghcr.io/medyagh/image-mirrors/busybox:latest
preload_test.go:56: (dbg) Done: out/minikube-linux-amd64 -p test-preload-805186 image pull ghcr.io/medyagh/image-mirrors/busybox:latest: (1.068447424s)
preload_test.go:62: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-805186
preload_test.go:62: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-805186: (8.083514997s)
--- PASS: TestPreload/Start-NoPreload-PullImage (57.21s)
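
This subtest deliberately starts with --preload=false, pulls an extra image into the node, and stops the cluster; Restart-With-Preload-Check-User-Image below then restarts the same profile and verifies the user-pulled busybox image survived. A quick way to inspect the cached images by hand would be:

  minikube -p test-preload-805186 image ls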

TestStartStop/group/old-k8s-version/serial/FirstStart (51.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-094398 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-094398 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (51.061972864s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (51.06s)

TestPreload/Restart-With-Preload-Check-User-Image (44.28s)

=== RUN   TestPreload/Restart-With-Preload-Check-User-Image
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-805186 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:71: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-805186 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (44.034418962s)
preload_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-805186 image list
--- PASS: TestPreload/Restart-With-Preload-Check-User-Image (44.28s)

TestStartStop/group/embed-certs/serial/FirstStart (37.53s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-912564 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-912564 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0: (37.532935525s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (37.53s)

TestStartStop/group/old-k8s-version/serial/DeployApp (10.27s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-094398 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [27e89ec3-23a4-4ea1-a3ac-26f25e43f5ad] Pending
helpers_test.go:353: "busybox" [27e89ec3-23a4-4ea1-a3ac-26f25e43f5ad] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [27e89ec3-23a4-4ea1-a3ac-26f25e43f5ad] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.003782851s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-094398 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.27s)
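
The readiness wait above is handled by the test harness; an equivalent hand-run check against the same label selector (a sketch using plain kubectl) would be:

  kubectl --context old-k8s-version-094398 wait --for=condition=Ready pod -l integration-test=busybox --timeout=8m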

TestStartStop/group/old-k8s-version/serial/Stop (16.04s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-094398 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-094398 --alsologtostderr -v=3: (16.035790439s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (16.04s)

TestStartStop/group/no-preload/serial/FirstStart (49.13s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-963457 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-963457 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0: (49.125756575s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (49.13s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.3s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-094398 -n old-k8s-version-094398
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-094398 -n old-k8s-version-094398: exit status 7 (110.976867ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-094398 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.30s)
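
The --format={{.Host}} argument is a Go template over minikube's status output, so the command prints only the host state ("Stopped" here) and reports the overall state through its exit code, which is why the harness tolerates exit status 7 after a stop. Other status fields can be selected the same way, e.g. (assuming the standard Host/Kubelet/APIServer field names):

  minikube status -p old-k8s-version-094398 --format='{{.Host}} {{.Kubelet}} {{.APIServer}}'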

TestStartStop/group/old-k8s-version/serial/SecondStart (53.04s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-094398 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-094398 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (52.68732665s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-094398 -n old-k8s-version-094398
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (53.04s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.25s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-196124
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-196124: (1.2509138s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.25s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (39.23s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-497722 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-497722 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0: (39.232355023s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (39.23s)

TestStartStop/group/embed-certs/serial/DeployApp (9.43s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-912564 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [274a708a-11df-4cc3-b67f-309656c1f9c6] Pending
helpers_test.go:353: "busybox" [274a708a-11df-4cc3-b67f-309656c1f9c6] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [274a708a-11df-4cc3-b67f-309656c1f9c6] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.004182341s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-912564 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.43s)

TestStartStop/group/embed-certs/serial/Stop (17.05s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-912564 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-912564 --alsologtostderr -v=3: (17.053651237s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (17.05s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-912564 -n embed-certs-912564
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-912564 -n embed-certs-912564: exit status 7 (82.276805ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-912564 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/embed-certs/serial/SecondStart (44.51s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-912564 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0
E1227 09:36:18.670370  377171 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/functional-583037/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-912564 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0: (44.035919873s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-912564 -n embed-certs-912564
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (44.51s)

TestStartStop/group/no-preload/serial/DeployApp (9.25s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-963457 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [ad53255a-3a97-45fe-bf01-72e0602f22fa] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [ad53255a-3a97-45fe-bf01-72e0602f22fa] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.004693662s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-963457 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.25s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.21s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-497722 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [e562f138-f6d6-49b7-a50f-2f3d20604171] Pending
helpers_test.go:353: "busybox" [e562f138-f6d6-49b7-a50f-2f3d20604171] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [e562f138-f6d6-49b7-a50f-2f3d20604171] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.00368271s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-497722 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.21s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-5jv6d" [77fcfcb8-b434-4190-af5d-903ac4004b5c] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003658618s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/Stop (18.12s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-963457 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-963457 --alsologtostderr -v=3: (18.123856778s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (18.12s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-5jv6d" [77fcfcb8-b434-4190-af5d-903ac4004b5c] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00367503s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-094398 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (17.71s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-497722 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-497722 --alsologtostderr -v=3: (17.709087396s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (17.71s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.22s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-094398 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.22s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-963457 -n no-preload-963457
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-963457 -n no-preload-963457: exit status 7 (81.007932ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-963457 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/no-preload/serial/SecondStart (51.34s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-963457 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-963457 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0: (50.991102728s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-963457 -n no-preload-963457
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (51.34s)

TestStartStop/group/newest-cni/serial/FirstStart (23.14s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-246956 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-246956 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0: (23.140726936s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (23.14s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-497722 -n default-k8s-diff-port-497722
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-497722 -n default-k8s-diff-port-497722: exit status 7 (93.342951ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-497722 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.23s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (45.66s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-497722 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-497722 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0: (45.292198059s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-497722 -n default-k8s-diff-port-497722
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (45.66s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-jlksn" [ae8f31ce-315e-41b1-b95d-055192b49c17] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00579842s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.09s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-jlksn" [ae8f31ce-315e-41b1-b95d-055192b49c17] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00337083s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-912564 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.09s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-912564 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/Stop (10.06s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-246956 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-246956 --alsologtostderr -v=3: (10.063008724s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (10.06s)

TestNetworkPlugins/group/auto/Start (37.28s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-157923 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
E1227 09:37:23.536708  377171 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/addons-102660/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-157923 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (37.275429364s)
--- PASS: TestNetworkPlugins/group/auto/Start (37.28s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-246956 -n newest-cni-246956
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-246956 -n newest-cni-246956: exit status 7 (84.672264ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-246956 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/newest-cni/serial/SecondStart (10.54s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-246956 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-246956 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0: (10.16949715s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-246956 -n newest-cni-246956
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (10.54s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-246956 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-hlxhq" [75bc6020-12cf-45f1-9c2b-f568b2e502d1] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00341899s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-f9pn7" [4a6b1fb6-89ce-456d-9bd5-f7e85e967eb7] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004000786s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestNetworkPlugins/group/kindnet/Start (38.62s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-157923 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-157923 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (38.61907401s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (38.62s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-hlxhq" [75bc6020-12cf-45f1-9c2b-f568b2e502d1] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003628011s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-963457 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-f9pn7" [4a6b1fb6-89ce-456d-9bd5-f7e85e967eb7] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003437173s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-497722 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-963457 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.27s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-497722 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

TestNetworkPlugins/group/auto/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-157923 "pgrep -a kubelet"
I1227 09:38:00.927283  377171 config.go:182] Loaded profile config "auto-157923": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.29s)
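
pgrep -a prints each matching PID together with its full command line, which is what lets this check inspect the flags the kubelet was actually started with. The same probe can be run by hand:

  minikube ssh -p auto-157923 "pgrep -a kubelet"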

TestNetworkPlugins/group/auto/NetCatPod (9.21s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-157923 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-8wmvd" [84a04f3f-6a47-43c5-b8a3-66acfee0f769] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-8wmvd" [84a04f3f-6a47-43c5-b8a3-66acfee0f769] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.005194297s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.21s)

TestNetworkPlugins/group/calico/Start (47.8s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-157923 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-157923 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (47.798901119s)
--- PASS: TestNetworkPlugins/group/calico/Start (47.80s)

TestNetworkPlugins/group/custom-flannel/Start (52.68s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-157923 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-157923 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (52.674974492s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (52.68s)

TestNetworkPlugins/group/auto/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-157923 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.17s)
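
A successful nslookup of kubernetes.default from inside the netcat pod shows cluster DNS working under the default network plugin: the short name expands through the pod's search domains to the API server Service. The fully qualified form can be queried the same way (a sketch, assuming the default cluster.local domain):

  kubectl --context auto-157923 exec deployment/netcat -- nslookup kubernetes.default.svc.cluster.local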

TestNetworkPlugins/group/auto/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-157923 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.10s)

TestNetworkPlugins/group/auto/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-157923 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.10s)
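
The Localhost and HairPin probes both run nc in zero-I/O scan mode (-z) with a five-second timeout (-w 5): the first confirms the pod can reach its own port via localhost, the second that it can reach itself back through its Service (hairpin traffic). Stripped of the harness, the hairpin check is essentially:

  kubectl --context auto-157923 exec deployment/netcat -- nc -w 5 -z netcat 8080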

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:353: "kindnet-r66tg" [59790819-cbbc-44b5-a5d3-60d937f8bc05] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003930609s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/enable-default-cni/Start (61.27s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-157923 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-157923 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m1.271110895s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (61.27s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-157923 "pgrep -a kubelet"
I1227 09:38:31.744618  377171 config.go:182] Loaded profile config "kindnet-157923": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.31s)

TestNetworkPlugins/group/kindnet/NetCatPod (10.19s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-157923 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-5rnqb" [670c3b25-6e4d-4527-8234-b2f79bd0c45c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-5rnqb" [670c3b25-6e4d-4527-8234-b2f79bd0c45c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.004047226s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.19s)
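NetCatPod deploys testdata/netcat-deployment.yaml with kubectl replace --force (so reruns start from a clean object) and then waits for the app=netcat pod to turn Ready. Waiting on the Deployment's own rollout is a near-equivalent manual check (sketch):

    kubectl --context kindnet-157923 rollout status deployment/netcat --timeout=15m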

TestNetworkPlugins/group/kindnet/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-157923 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.12s)

TestNetworkPlugins/group/kindnet/Localhost (0.09s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-157923 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.09s)

TestNetworkPlugins/group/kindnet/HairPin (0.09s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-157923 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.09s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:353: "calico-node-mxrtp" [b32cccfd-fb6d-428e-b7e0-620f6d900389] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:353: "calico-node-mxrtp" [b32cccfd-fb6d-428e-b7e0-620f6d900389] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004701617s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.41s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-157923 "pgrep -a kubelet"
I1227 09:38:55.350406  377171 config.go:182] Loaded profile config "custom-flannel-157923": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.41s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (8.21s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-157923 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-x68pn" [25bc9cd5-d7d1-4c40-97a4-d4cbc182f531] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
I1227 09:38:55.853638  377171 config.go:182] Loaded profile config "calico-157923": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
helpers_test.go:353: "netcat-5dd4ccdc4b-x68pn" [25bc9cd5-d7d1-4c40-97a4-d4cbc182f531] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 8.003883749s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (8.21s)

TestNetworkPlugins/group/calico/KubeletFlags (0.33s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-157923 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.33s)

TestNetworkPlugins/group/calico/NetCatPod (8.21s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-157923 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-rrrmh" [5d4d1ce6-468f-463e-93b8-4cad2e62dc14] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-rrrmh" [5d4d1ce6-468f-463e-93b8-4cad2e62dc14] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 8.004554785s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (8.21s)

TestNetworkPlugins/group/flannel/Start (46.91s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-157923 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-157923 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (46.912203275s)
--- PASS: TestNetworkPlugins/group/flannel/Start (46.91s)
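--cni=flannel deploys the upstream flannel DaemonSet, which the flannel ControllerPod subtest later in this report finds under the app=flannel label in the kube-flannel namespace. A quick manual check that the DaemonSet is fully scheduled (sketch, reusing the names visible in this log):

    kubectl --context flannel-157923 -n kube-flannel get ds,pods -l app=flannel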

TestNetworkPlugins/group/custom-flannel/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-157923 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.12s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-157923 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.12s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-157923 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.10s)

TestNetworkPlugins/group/calico/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-157923 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.12s)

TestNetworkPlugins/group/calico/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-157923 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.10s)

TestNetworkPlugins/group/calico/HairPin (0.09s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-157923 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.09s)

TestNetworkPlugins/group/bridge/Start (57.57s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-157923 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-157923 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (57.56591278s)
--- PASS: TestNetworkPlugins/group/bridge/Start (57.57s)

TestPreload/PreloadSrc/gcs (11.2s)

=== RUN   TestPreload/PreloadSrc/gcs
preload_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-dl-gcs-590200 --download-only --kubernetes-version v1.34.0-rc.1 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=crio
preload_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-dl-gcs-590200 --download-only --kubernetes-version v1.34.0-rc.1 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=crio: (11.006677897s)
helpers_test.go:176: Cleaning up "test-preload-dl-gcs-590200" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-dl-gcs-590200
--- PASS: TestPreload/PreloadSrc/gcs (11.20s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.35s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-157923 "pgrep -a kubelet"
I1227 09:39:33.030909  377171 config.go:182] Loaded profile config "enable-default-cni-157923": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.35s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.22s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-157923 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-bb2sr" [4a459de5-fb8d-4e0a-8d98-ec54d33f54db] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-bb2sr" [4a459de5-fb8d-4e0a-8d98-ec54d33f54db] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.003627975s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.22s)

TestPreload/PreloadSrc/github (7.32s)

=== RUN   TestPreload/PreloadSrc/github
preload_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-dl-github-029627 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=github --alsologtostderr --v=1 --driver=docker  --container-runtime=crio
preload_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-dl-github-029627 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=github --alsologtostderr --v=1 --driver=docker  --container-runtime=crio: (7.095243615s)
helpers_test.go:176: Cleaning up "test-preload-dl-github-029627" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-dl-github-029627
--- PASS: TestPreload/PreloadSrc/github (7.32s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-157923 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.12s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-157923 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.10s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.09s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-157923 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.09s)

TestPreload/PreloadSrc/gcs-cached (0.43s)

=== RUN   TestPreload/PreloadSrc/gcs-cached
preload_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-dl-gcs-cached-446346 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=crio
helpers_test.go:176: Cleaning up "test-preload-dl-gcs-cached-446346" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-dl-gcs-cached-446346
--- PASS: TestPreload/PreloadSrc/gcs-cached (0.43s)
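The three PreloadSrc runs make the caching behaviour visible: the gcs download of the v1.34.0-rc.1 preload takes ~11s, the github download of v1.34.0-rc.2 takes ~7s, and the final gcs run for v1.34.0-rc.2 completes in 0.43s because that tarball is already in the local cache from the github run. A way to inspect the cache (sketch; assumes minikube's conventional preloaded-tarball cache layout under the MINIKUBE_HOME used by this job):

    ls /home/jenkins/minikube-integration/22343-373581/.minikube/cache/preloaded-tarball/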

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:353: "kube-flannel-ds-ktv8l" [2ce43989-d037-4c41-abd1-ec46bad99379] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003889757s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-157923 "pgrep -a kubelet"
I1227 09:39:56.018263  377171 config.go:182] Loaded profile config "flannel-157923": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.29s)

TestNetworkPlugins/group/flannel/NetCatPod (9.18s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-157923 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-rlxws" [8b117abf-99b7-4937-b4e3-26654a7c500f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-rlxws" [8b117abf-99b7-4937-b4e3-26654a7c500f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.003576807s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.18s)

TestNetworkPlugins/group/flannel/DNS (0.11s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-157923 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.11s)

TestNetworkPlugins/group/flannel/Localhost (0.08s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-157923 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.08s)

TestNetworkPlugins/group/flannel/HairPin (0.08s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-157923 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.08s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-157923 "pgrep -a kubelet"
I1227 09:40:23.009431  377171 config.go:182] Loaded profile config "bridge-157923": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.28s)

TestNetworkPlugins/group/bridge/NetCatPod (9.16s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-157923 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-wgs2w" [b5594902-59fe-46e4-b570-36a3b6e8416a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-wgs2w" [b5594902-59fe-46e4-b570-36a3b6e8416a] Running
E1227 09:40:29.870362  377171 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/old-k8s-version-094398/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.004146757s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.16s)

TestNetworkPlugins/group/bridge/DNS (0.1s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-157923 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.10s)

TestNetworkPlugins/group/bridge/Localhost (0.08s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-157923 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.08s)

TestNetworkPlugins/group/bridge/HairPin (0.08s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-157923 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.08s)

Test skip (27/332)

TestDownloadOnly/v1.28.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

TestDownloadOnly/v1.28.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

TestDownloadOnly/v1.28.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

TestDownloadOnly/v1.35.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.35.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.35.0/cached-images (0.00s)

TestDownloadOnly/v1.35.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.35.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.35.0/binaries (0.00s)

TestDownloadOnly/v1.35.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.35.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.35.0/kubectl (0.00s)

TestAddons/serial/GCPAuth/RealCredentials (0s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:765: skipping GCPAuth addon test until 'Permission "artifactregistry.repositories.downloadArtifacts" denied on resource "projects/k8s-minikube/locations/us/repositories/test-artifacts" (or it may not exist)' issue is resolved
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:485: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestFunctionalNewestKubernetes (0s)

=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestISOImage (0s)

=== RUN   TestISOImage
iso_test.go:36: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.21s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:176: Cleaning up "disable-driver-mounts-917808" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-917808
--- SKIP: TestStartStop/group/disable-driver-mounts (0.21s)

TestNetworkPlugins/group/kubenet (3.82s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:615: 
----------------------- debugLogs start: kubenet-157923 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-157923

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-157923

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-157923

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-157923

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-157923

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-157923

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-157923

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-157923

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-157923

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-157923

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-157923" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-157923"

>>> host: /etc/hosts:
* Profile "kubenet-157923" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-157923"

>>> host: /etc/resolv.conf:
* Profile "kubenet-157923" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-157923"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-157923

>>> host: crictl pods:
* Profile "kubenet-157923" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-157923"

>>> host: crictl containers:
* Profile "kubenet-157923" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-157923"

>>> k8s: describe netcat deployment:
error: context "kubenet-157923" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-157923" does not exist

>>> k8s: netcat logs:
error: context "kubenet-157923" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-157923" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-157923" does not exist

>>> k8s: coredns logs:
error: context "kubenet-157923" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-157923" does not exist

>>> k8s: api server logs:
error: context "kubenet-157923" does not exist

>>> host: /etc/cni:
* Profile "kubenet-157923" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-157923"

>>> host: ip a s:
* Profile "kubenet-157923" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-157923"

>>> host: ip r s:
* Profile "kubenet-157923" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-157923"

>>> host: iptables-save:
* Profile "kubenet-157923" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-157923"

>>> host: iptables table nat:
* Profile "kubenet-157923" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-157923"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-157923" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-157923" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-157923" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-157923" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-157923"

>>> host: kubelet daemon config:
* Profile "kubenet-157923" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-157923"

>>> k8s: kubelet logs:
* Profile "kubenet-157923" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-157923"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-157923" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-157923"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-157923" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-157923"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22343-373581/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 27 Dec 2025 09:31:06 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: pause-174795
contexts:
- context:
    cluster: pause-174795
    extensions:
    - extension:
        last-update: Sat, 27 Dec 2025 09:31:06 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: pause-174795
  name: pause-174795
current-context: pause-174795
kind: Config
users:
- name: pause-174795
  user:
    client-certificate: /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/pause-174795/client.crt
    client-key: /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/pause-174795/client.key
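Note that the dump above is the shared kubeconfig, not anything specific to kubenet-157923: since that profile was never started, the only context present belongs to pause-174795, presumably left behind by a concurrently running test. Listing contexts shows the same thing (sketch):

    kubectl config get-contexts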

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-157923

>>> host: docker daemon status:
* Profile "kubenet-157923" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-157923"

>>> host: docker daemon config:
* Profile "kubenet-157923" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-157923"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-157923" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-157923"

>>> host: docker system info:
* Profile "kubenet-157923" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-157923"

>>> host: cri-docker daemon status:
* Profile "kubenet-157923" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-157923"

>>> host: cri-docker daemon config:
* Profile "kubenet-157923" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-157923"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-157923" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-157923"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-157923" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-157923"

>>> host: cri-dockerd version:
* Profile "kubenet-157923" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-157923"

>>> host: containerd daemon status:
* Profile "kubenet-157923" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-157923"

>>> host: containerd daemon config:
* Profile "kubenet-157923" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-157923"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-157923" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-157923"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-157923" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-157923"

>>> host: containerd config dump:
* Profile "kubenet-157923" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-157923"

>>> host: crio daemon status:
* Profile "kubenet-157923" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-157923"

>>> host: crio daemon config:
* Profile "kubenet-157923" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-157923"

>>> host: /etc/crio:
* Profile "kubenet-157923" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-157923"

>>> host: crio config:
* Profile "kubenet-157923" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-157923"

----------------------- debugLogs end: kubenet-157923 [took: 3.630887024s] --------------------------------
helpers_test.go:176: Cleaning up "kubenet-157923" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-157923
--- SKIP: TestNetworkPlugins/group/kubenet (3.82s)

TestNetworkPlugins/group/cilium (4.2s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:615: 
----------------------- debugLogs start: cilium-157923 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-157923

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-157923

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-157923

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-157923

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-157923

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-157923

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-157923

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-157923

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-157923

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-157923

>>> host: /etc/nsswitch.conf:
* Profile "cilium-157923" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-157923"

>>> host: /etc/hosts:
* Profile "cilium-157923" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-157923"

>>> host: /etc/resolv.conf:
* Profile "cilium-157923" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-157923"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-157923

>>> host: crictl pods:
* Profile "cilium-157923" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-157923"

>>> host: crictl containers:
* Profile "cilium-157923" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-157923"

>>> k8s: describe netcat deployment:
error: context "cilium-157923" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-157923" does not exist

>>> k8s: netcat logs:
error: context "cilium-157923" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-157923" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-157923" does not exist

>>> k8s: coredns logs:
error: context "cilium-157923" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-157923" does not exist

>>> k8s: api server logs:
error: context "cilium-157923" does not exist

>>> host: /etc/cni:
* Profile "cilium-157923" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-157923"

>>> host: ip a s:
* Profile "cilium-157923" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-157923"

>>> host: ip r s:
* Profile "cilium-157923" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-157923"

>>> host: iptables-save:
* Profile "cilium-157923" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-157923"

>>> host: iptables table nat:
* Profile "cilium-157923" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-157923"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-157923

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-157923

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-157923" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-157923" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-157923

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-157923

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-157923" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-157923" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-157923" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-157923" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-157923" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-157923" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-157923"

>>> host: kubelet daemon config:
* Profile "cilium-157923" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-157923"

>>> k8s: kubelet logs:
* Profile "cilium-157923" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-157923"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-157923" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-157923"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-157923" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-157923"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22343-373581/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 27 Dec 2025 09:31:27 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: pause-174795
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22343-373581/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 27 Dec 2025 09:31:30 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: stopped-upgrade-196124
contexts:
- context:
    cluster: pause-174795
    extensions:
    - extension:
        last-update: Sat, 27 Dec 2025 09:31:27 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: pause-174795
  name: pause-174795
- context:
    cluster: stopped-upgrade-196124
    user: stopped-upgrade-196124
  name: stopped-upgrade-196124
current-context: stopped-upgrade-196124
kind: Config
users:
- name: pause-174795
  user:
    client-certificate: /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/pause-174795/client.crt
    client-key: /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/pause-174795/client.key
- name: stopped-upgrade-196124
  user:
    client-certificate: /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/stopped-upgrade-196124/client.crt
    client-key: /home/jenkins/minikube-integration/22343-373581/.minikube/profiles/stopped-upgrade-196124/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-157923

>>> host: docker daemon status:
* Profile "cilium-157923" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-157923"

>>> host: docker daemon config:
* Profile "cilium-157923" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-157923"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-157923" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-157923"

>>> host: docker system info:
* Profile "cilium-157923" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-157923"

>>> host: cri-docker daemon status:
* Profile "cilium-157923" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-157923"

>>> host: cri-docker daemon config:
* Profile "cilium-157923" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-157923"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-157923" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-157923"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-157923" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-157923"

>>> host: cri-dockerd version:
* Profile "cilium-157923" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-157923"

>>> host: containerd daemon status:
* Profile "cilium-157923" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-157923"

>>> host: containerd daemon config:
* Profile "cilium-157923" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-157923"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-157923" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-157923"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-157923" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-157923"

>>> host: containerd config dump:
* Profile "cilium-157923" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-157923"

>>> host: crio daemon status:
* Profile "cilium-157923" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-157923"

>>> host: crio daemon config:
* Profile "cilium-157923" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-157923"

>>> host: /etc/crio:
* Profile "cilium-157923" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-157923"

>>> host: crio config:
* Profile "cilium-157923" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-157923"

----------------------- debugLogs end: cilium-157923 [took: 4.042629507s] --------------------------------
helpers_test.go:176: Cleaning up "cilium-157923" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-157923
--- SKIP: TestNetworkPlugins/group/cilium (4.20s)
