Test Report: Docker_Linux_crio 22402

783b0304fb34eb1d9554b20c324bb66df0781ba8:2026-01-11:43196

Failed tests (26/332)

TestAddons/serial/Volcano (0.25s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:852: skipping: crio not supported
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-171536 addons disable volcano --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-171536 addons disable volcano --alsologtostderr -v=1: exit status 11 (252.3745ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0111 07:25:01.139744   17586 out.go:360] Setting OutFile to fd 1 ...
	I0111 07:25:01.139880   17586 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 07:25:01.139889   17586 out.go:374] Setting ErrFile to fd 2...
	I0111 07:25:01.139893   17586 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 07:25:01.140182   17586 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22402-4689/.minikube/bin
	I0111 07:25:01.140492   17586 mustload.go:66] Loading cluster: addons-171536
	I0111 07:25:01.140788   17586 config.go:182] Loaded profile config "addons-171536": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 07:25:01.140821   17586 addons.go:622] checking whether the cluster is paused
	I0111 07:25:01.140908   17586 config.go:182] Loaded profile config "addons-171536": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 07:25:01.140926   17586 host.go:66] Checking if "addons-171536" exists ...
	I0111 07:25:01.141915   17586 cli_runner.go:164] Run: docker container inspect addons-171536 --format={{.State.Status}}
	I0111 07:25:01.160059   17586 ssh_runner.go:195] Run: systemctl --version
	I0111 07:25:01.160099   17586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-171536
	I0111 07:25:01.178100   17586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22402-4689/.minikube/machines/addons-171536/id_rsa Username:docker}
	I0111 07:25:01.279275   17586 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0111 07:25:01.279376   17586 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0111 07:25:01.308831   17586 cri.go:96] found id: "dec2189f99757ab851395688c7fc4d79ec795d60f0fe442fda316b4651b5f55d"
	I0111 07:25:01.308857   17586 cri.go:96] found id: "020fa17cec32bb1bb91372abed236b039862627b1fd8fd4fde81f48ed468506a"
	I0111 07:25:01.308861   17586 cri.go:96] found id: "4e96c6658214a060ecae16a626e9deca9b428dca524c173c67acc71276d31ed8"
	I0111 07:25:01.308864   17586 cri.go:96] found id: "61261d2742245ff39d3fe14f2e1e7bf0d863678be3fe67b5c66b7add2e5dafe0"
	I0111 07:25:01.308867   17586 cri.go:96] found id: "e82c2d321912890e0c95828d601e534add5486d961339090496e0b7bf8e14e40"
	I0111 07:25:01.308871   17586 cri.go:96] found id: "0073925a1c9e9b81559aaedeae0ea376d65f0b0f86d39d521946a25c0c681ec6"
	I0111 07:25:01.308873   17586 cri.go:96] found id: "e537a2e8b2bf98a5f92bbdc7021887d792dac8e90d307f035a5b086dd94603f4"
	I0111 07:25:01.308876   17586 cri.go:96] found id: "38477b1d03e50987daaf6a39bfb28bf756ed5dc7faec7b69c6731256e2223590"
	I0111 07:25:01.308879   17586 cri.go:96] found id: "4f25c64762e22f492ff4390a2fb5b3c94b2c8b08468692efef8e56d24c717155"
	I0111 07:25:01.308884   17586 cri.go:96] found id: "378d489b0bf341cff405d4cf64f389fc60c711e7d07757a6db4ab7f7192341e6"
	I0111 07:25:01.308887   17586 cri.go:96] found id: "3ff9219f13f87d357a602e307148cc4f3f4ba6e7b581c252ebcf4276c82d4a42"
	I0111 07:25:01.308890   17586 cri.go:96] found id: "c8cfde75062d12d10e4e19ee50819af72fa78518b9ff94a977400346bedc564f"
	I0111 07:25:01.308893   17586 cri.go:96] found id: "412a3cc9d76f6e48dcc863110232a61b800f70b26ea661fa84e8a16c262f410f"
	I0111 07:25:01.308896   17586 cri.go:96] found id: "f453e17ecff346a3d083681ecb4ca0954e14c5abac83bbda4d571c8646eb5500"
	I0111 07:25:01.308899   17586 cri.go:96] found id: "c30a8b92335dd9ed1a8ed66d1318db07964121f3abe89ee920af34af1cf340a5"
	I0111 07:25:01.308914   17586 cri.go:96] found id: "d5450d34ec89c1809b1620960a5552a805b79b716d6ca586749b1118a929e70b"
	I0111 07:25:01.308919   17586 cri.go:96] found id: "bb9ae55f945e1d917ea4576711c3aa2aa501f59b2c2ba14af5414eb1a9b01ecc"
	I0111 07:25:01.308923   17586 cri.go:96] found id: "786a8dbc1c197cacefbc38263fc9b22c0f085386e7ba56db80565adc04738c53"
	I0111 07:25:01.308926   17586 cri.go:96] found id: "47e635841991a532d5fdca1c3fb0fca6b3bafa3cfd322360c14ae5982bad6e65"
	I0111 07:25:01.308929   17586 cri.go:96] found id: "3fee420a7842ae5fb9d53e5ad7e801e60256ab3660cc9f2ebfdd5fb9619d8ead"
	I0111 07:25:01.308934   17586 cri.go:96] found id: "f61be6c7d6abd9153c648bad6294ad71748a4f698c482a43c7b341d16d84f47a"
	I0111 07:25:01.308937   17586 cri.go:96] found id: "bb18076e67f3c6ae629aa4724afeaf0fc653355feab08f2a4b65110d19e4030b"
	I0111 07:25:01.308939   17586 cri.go:96] found id: "3ffa2d6de25c6ece3569c7eff61e3fd3c0d746f7c3c515043a81e0f3e19d49cc"
	I0111 07:25:01.308942   17586 cri.go:96] found id: "883f9697f51fca90435f0ac5afb738b9e4e9e090e04188a7094a910cc6458898"
	I0111 07:25:01.308944   17586 cri.go:96] found id: ""
	I0111 07:25:01.308995   17586 ssh_runner.go:195] Run: sudo runc list -f json
	I0111 07:25:01.322967   17586 out.go:203] 
	W0111 07:25:01.324299   17586 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-11T07:25:01Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-11T07:25:01Z" level=error msg="open /run/runc: no such file or directory"
	
	W0111 07:25:01.324323   17586 out.go:285] * 
	* 
	W0111 07:25:01.325161   17586 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0111 07:25:01.326446   17586 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable volcano addon: args "out/minikube-linux-amd64 -p addons-171536 addons disable volcano --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/serial/Volcano (0.25s)
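
Each addon-disable failure in this report has the same signature: minikube's paused-state check lists the kube-system containers through crictl successfully, then runs `sudo runc list -f json` on the node, which exits 1 under the crio runtime because /run/runc does not exist, so the disable aborts with MK_ADDON_DISABLE_PAUSED. A minimal manual check of that state, assuming the addons-171536 profile from this run is still up (hypothetical session, not part of the recorded test output):

out/minikube-linux-amd64 -p addons-171536 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system   # succeeds, matching the "found id" lines above
out/minikube-linux-amd64 -p addons-171536 ssh -- sudo runc list -f json   # expected to fail: open /run/runc: no such file or directory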

TestAddons/parallel/Registry (13.77s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:384: registry stabilized in 3.131134ms
addons_test.go:386: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-788cd7d5bc-kpth5" [0bfa7dde-fb3b-431e-b149-2cd772930827] Running
addons_test.go:386: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.002501226s
addons_test.go:389: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-proxy-npjfm" [66d3b5fb-2e47-407f-ad35-f70e2aa4041d] Running
addons_test.go:389: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003221842s
addons_test.go:394: (dbg) Run:  kubectl --context addons-171536 delete po -l run=registry-test --now
addons_test.go:399: (dbg) Run:  kubectl --context addons-171536 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:399: (dbg) Done: kubectl --context addons-171536 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.225213382s)
addons_test.go:413: (dbg) Run:  out/minikube-linux-amd64 -p addons-171536 ip
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-171536 addons disable registry --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-171536 addons disable registry --alsologtostderr -v=1: exit status 11 (296.30272ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0111 07:25:22.665096   20148 out.go:360] Setting OutFile to fd 1 ...
	I0111 07:25:22.665231   20148 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 07:25:22.665237   20148 out.go:374] Setting ErrFile to fd 2...
	I0111 07:25:22.665241   20148 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 07:25:22.665431   20148 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22402-4689/.minikube/bin
	I0111 07:25:22.665697   20148 mustload.go:66] Loading cluster: addons-171536
	I0111 07:25:22.666112   20148 config.go:182] Loaded profile config "addons-171536": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 07:25:22.666151   20148 addons.go:622] checking whether the cluster is paused
	I0111 07:25:22.666289   20148 config.go:182] Loaded profile config "addons-171536": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 07:25:22.666326   20148 host.go:66] Checking if "addons-171536" exists ...
	I0111 07:25:22.666714   20148 cli_runner.go:164] Run: docker container inspect addons-171536 --format={{.State.Status}}
	I0111 07:25:22.703126   20148 ssh_runner.go:195] Run: systemctl --version
	I0111 07:25:22.703198   20148 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-171536
	I0111 07:25:22.744534   20148 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22402-4689/.minikube/machines/addons-171536/id_rsa Username:docker}
	I0111 07:25:22.845274   20148 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0111 07:25:22.845378   20148 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0111 07:25:22.877092   20148 cri.go:96] found id: "dec2189f99757ab851395688c7fc4d79ec795d60f0fe442fda316b4651b5f55d"
	I0111 07:25:22.877113   20148 cri.go:96] found id: "020fa17cec32bb1bb91372abed236b039862627b1fd8fd4fde81f48ed468506a"
	I0111 07:25:22.877126   20148 cri.go:96] found id: "4e96c6658214a060ecae16a626e9deca9b428dca524c173c67acc71276d31ed8"
	I0111 07:25:22.877131   20148 cri.go:96] found id: "61261d2742245ff39d3fe14f2e1e7bf0d863678be3fe67b5c66b7add2e5dafe0"
	I0111 07:25:22.877136   20148 cri.go:96] found id: "e82c2d321912890e0c95828d601e534add5486d961339090496e0b7bf8e14e40"
	I0111 07:25:22.877141   20148 cri.go:96] found id: "0073925a1c9e9b81559aaedeae0ea376d65f0b0f86d39d521946a25c0c681ec6"
	I0111 07:25:22.877146   20148 cri.go:96] found id: "e537a2e8b2bf98a5f92bbdc7021887d792dac8e90d307f035a5b086dd94603f4"
	I0111 07:25:22.877151   20148 cri.go:96] found id: "38477b1d03e50987daaf6a39bfb28bf756ed5dc7faec7b69c6731256e2223590"
	I0111 07:25:22.877155   20148 cri.go:96] found id: "4f25c64762e22f492ff4390a2fb5b3c94b2c8b08468692efef8e56d24c717155"
	I0111 07:25:22.877165   20148 cri.go:96] found id: "378d489b0bf341cff405d4cf64f389fc60c711e7d07757a6db4ab7f7192341e6"
	I0111 07:25:22.877175   20148 cri.go:96] found id: "3ff9219f13f87d357a602e307148cc4f3f4ba6e7b581c252ebcf4276c82d4a42"
	I0111 07:25:22.877180   20148 cri.go:96] found id: "c8cfde75062d12d10e4e19ee50819af72fa78518b9ff94a977400346bedc564f"
	I0111 07:25:22.877187   20148 cri.go:96] found id: "412a3cc9d76f6e48dcc863110232a61b800f70b26ea661fa84e8a16c262f410f"
	I0111 07:25:22.877192   20148 cri.go:96] found id: "f453e17ecff346a3d083681ecb4ca0954e14c5abac83bbda4d571c8646eb5500"
	I0111 07:25:22.877200   20148 cri.go:96] found id: "c30a8b92335dd9ed1a8ed66d1318db07964121f3abe89ee920af34af1cf340a5"
	I0111 07:25:22.877216   20148 cri.go:96] found id: "d5450d34ec89c1809b1620960a5552a805b79b716d6ca586749b1118a929e70b"
	I0111 07:25:22.877223   20148 cri.go:96] found id: "bb9ae55f945e1d917ea4576711c3aa2aa501f59b2c2ba14af5414eb1a9b01ecc"
	I0111 07:25:22.877229   20148 cri.go:96] found id: "786a8dbc1c197cacefbc38263fc9b22c0f085386e7ba56db80565adc04738c53"
	I0111 07:25:22.877237   20148 cri.go:96] found id: "47e635841991a532d5fdca1c3fb0fca6b3bafa3cfd322360c14ae5982bad6e65"
	I0111 07:25:22.877242   20148 cri.go:96] found id: "3fee420a7842ae5fb9d53e5ad7e801e60256ab3660cc9f2ebfdd5fb9619d8ead"
	I0111 07:25:22.877252   20148 cri.go:96] found id: "f61be6c7d6abd9153c648bad6294ad71748a4f698c482a43c7b341d16d84f47a"
	I0111 07:25:22.877260   20148 cri.go:96] found id: "bb18076e67f3c6ae629aa4724afeaf0fc653355feab08f2a4b65110d19e4030b"
	I0111 07:25:22.877265   20148 cri.go:96] found id: "3ffa2d6de25c6ece3569c7eff61e3fd3c0d746f7c3c515043a81e0f3e19d49cc"
	I0111 07:25:22.877272   20148 cri.go:96] found id: "883f9697f51fca90435f0ac5afb738b9e4e9e090e04188a7094a910cc6458898"
	I0111 07:25:22.877276   20148 cri.go:96] found id: ""
	I0111 07:25:22.877323   20148 ssh_runner.go:195] Run: sudo runc list -f json
	I0111 07:25:22.891289   20148 out.go:203] 
	W0111 07:25:22.892453   20148 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-11T07:25:22Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-11T07:25:22Z" level=error msg="open /run/runc: no such file or directory"
	
	W0111 07:25:22.892480   20148 out.go:285] * 
	* 
	W0111 07:25:22.893323   20148 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0111 07:25:22.894494   20148 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable registry addon: args "out/minikube-linux-amd64 -p addons-171536 addons disable registry --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Registry (13.77s)

TestAddons/parallel/RegistryCreds (0.4s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:325: registry-creds stabilized in 2.843328ms
addons_test.go:327: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-171536
addons_test.go:334: (dbg) Run:  kubectl --context addons-171536 -n kube-system get secret -o yaml
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-171536 addons disable registry-creds --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-171536 addons disable registry-creds --alsologtostderr -v=1: exit status 11 (241.786193ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0111 07:25:25.151254   20565 out.go:360] Setting OutFile to fd 1 ...
	I0111 07:25:25.151549   20565 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 07:25:25.151560   20565 out.go:374] Setting ErrFile to fd 2...
	I0111 07:25:25.151567   20565 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 07:25:25.151777   20565 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22402-4689/.minikube/bin
	I0111 07:25:25.152081   20565 mustload.go:66] Loading cluster: addons-171536
	I0111 07:25:25.152415   20565 config.go:182] Loaded profile config "addons-171536": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 07:25:25.152433   20565 addons.go:622] checking whether the cluster is paused
	I0111 07:25:25.152529   20565 config.go:182] Loaded profile config "addons-171536": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 07:25:25.152545   20565 host.go:66] Checking if "addons-171536" exists ...
	I0111 07:25:25.152962   20565 cli_runner.go:164] Run: docker container inspect addons-171536 --format={{.State.Status}}
	I0111 07:25:25.171098   20565 ssh_runner.go:195] Run: systemctl --version
	I0111 07:25:25.171148   20565 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-171536
	I0111 07:25:25.188621   20565 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22402-4689/.minikube/machines/addons-171536/id_rsa Username:docker}
	I0111 07:25:25.289001   20565 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0111 07:25:25.289075   20565 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0111 07:25:25.319200   20565 cri.go:96] found id: "dec2189f99757ab851395688c7fc4d79ec795d60f0fe442fda316b4651b5f55d"
	I0111 07:25:25.319220   20565 cri.go:96] found id: "020fa17cec32bb1bb91372abed236b039862627b1fd8fd4fde81f48ed468506a"
	I0111 07:25:25.319224   20565 cri.go:96] found id: "4e96c6658214a060ecae16a626e9deca9b428dca524c173c67acc71276d31ed8"
	I0111 07:25:25.319227   20565 cri.go:96] found id: "61261d2742245ff39d3fe14f2e1e7bf0d863678be3fe67b5c66b7add2e5dafe0"
	I0111 07:25:25.319230   20565 cri.go:96] found id: "e82c2d321912890e0c95828d601e534add5486d961339090496e0b7bf8e14e40"
	I0111 07:25:25.319233   20565 cri.go:96] found id: "0073925a1c9e9b81559aaedeae0ea376d65f0b0f86d39d521946a25c0c681ec6"
	I0111 07:25:25.319236   20565 cri.go:96] found id: "e537a2e8b2bf98a5f92bbdc7021887d792dac8e90d307f035a5b086dd94603f4"
	I0111 07:25:25.319239   20565 cri.go:96] found id: "38477b1d03e50987daaf6a39bfb28bf756ed5dc7faec7b69c6731256e2223590"
	I0111 07:25:25.319242   20565 cri.go:96] found id: "4f25c64762e22f492ff4390a2fb5b3c94b2c8b08468692efef8e56d24c717155"
	I0111 07:25:25.319246   20565 cri.go:96] found id: "378d489b0bf341cff405d4cf64f389fc60c711e7d07757a6db4ab7f7192341e6"
	I0111 07:25:25.319249   20565 cri.go:96] found id: "3ff9219f13f87d357a602e307148cc4f3f4ba6e7b581c252ebcf4276c82d4a42"
	I0111 07:25:25.319251   20565 cri.go:96] found id: "c8cfde75062d12d10e4e19ee50819af72fa78518b9ff94a977400346bedc564f"
	I0111 07:25:25.319259   20565 cri.go:96] found id: "412a3cc9d76f6e48dcc863110232a61b800f70b26ea661fa84e8a16c262f410f"
	I0111 07:25:25.319262   20565 cri.go:96] found id: "f453e17ecff346a3d083681ecb4ca0954e14c5abac83bbda4d571c8646eb5500"
	I0111 07:25:25.319265   20565 cri.go:96] found id: "c30a8b92335dd9ed1a8ed66d1318db07964121f3abe89ee920af34af1cf340a5"
	I0111 07:25:25.319269   20565 cri.go:96] found id: "d5450d34ec89c1809b1620960a5552a805b79b716d6ca586749b1118a929e70b"
	I0111 07:25:25.319271   20565 cri.go:96] found id: "bb9ae55f945e1d917ea4576711c3aa2aa501f59b2c2ba14af5414eb1a9b01ecc"
	I0111 07:25:25.319275   20565 cri.go:96] found id: "786a8dbc1c197cacefbc38263fc9b22c0f085386e7ba56db80565adc04738c53"
	I0111 07:25:25.319277   20565 cri.go:96] found id: "47e635841991a532d5fdca1c3fb0fca6b3bafa3cfd322360c14ae5982bad6e65"
	I0111 07:25:25.319280   20565 cri.go:96] found id: "3fee420a7842ae5fb9d53e5ad7e801e60256ab3660cc9f2ebfdd5fb9619d8ead"
	I0111 07:25:25.319282   20565 cri.go:96] found id: "f61be6c7d6abd9153c648bad6294ad71748a4f698c482a43c7b341d16d84f47a"
	I0111 07:25:25.319285   20565 cri.go:96] found id: "bb18076e67f3c6ae629aa4724afeaf0fc653355feab08f2a4b65110d19e4030b"
	I0111 07:25:25.319288   20565 cri.go:96] found id: "3ffa2d6de25c6ece3569c7eff61e3fd3c0d746f7c3c515043a81e0f3e19d49cc"
	I0111 07:25:25.319294   20565 cri.go:96] found id: "883f9697f51fca90435f0ac5afb738b9e4e9e090e04188a7094a910cc6458898"
	I0111 07:25:25.319296   20565 cri.go:96] found id: ""
	I0111 07:25:25.319335   20565 ssh_runner.go:195] Run: sudo runc list -f json
	I0111 07:25:25.332583   20565 out.go:203] 
	W0111 07:25:25.333885   20565 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-11T07:25:25Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-11T07:25:25Z" level=error msg="open /run/runc: no such file or directory"
	
	W0111 07:25:25.333903   20565 out.go:285] * 
	* 
	W0111 07:25:25.334544   20565 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0111 07:25:25.335840   20565 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable registry-creds addon: args "out/minikube-linux-amd64 -p addons-171536 addons disable registry-creds --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/RegistryCreds (0.40s)

TestAddons/parallel/Ingress (8.78s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:211: (dbg) Run:  kubectl --context addons-171536 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:236: (dbg) Run:  kubectl --context addons-171536 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:249: (dbg) Run:  kubectl --context addons-171536 replace --force -f testdata/nginx-pod-svc.yaml
2026/01/11 07:25:22 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:353: "nginx" [7d55cc9e-2504-4e77-a737-150deff4007d] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx" [7d55cc9e-2504-4e77-a737-150deff4007d] Running
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 7.003790561s
I0111 07:25:29.745145    8238 kapi.go:150] Service nginx in namespace default found.
addons_test.go:266: (dbg) Run:  out/minikube-linux-amd64 -p addons-171536 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:290: (dbg) Run:  kubectl --context addons-171536 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:295: (dbg) Run:  out/minikube-linux-amd64 -p addons-171536 ip
addons_test.go:301: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-171536 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-171536 addons disable ingress-dns --alsologtostderr -v=1: exit status 11 (253.407407ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0111 07:25:30.535440   21389 out.go:360] Setting OutFile to fd 1 ...
	I0111 07:25:30.535728   21389 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 07:25:30.535739   21389 out.go:374] Setting ErrFile to fd 2...
	I0111 07:25:30.535744   21389 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 07:25:30.535980   21389 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22402-4689/.minikube/bin
	I0111 07:25:30.536391   21389 mustload.go:66] Loading cluster: addons-171536
	I0111 07:25:30.536729   21389 config.go:182] Loaded profile config "addons-171536": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 07:25:30.536747   21389 addons.go:622] checking whether the cluster is paused
	I0111 07:25:30.536900   21389 config.go:182] Loaded profile config "addons-171536": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 07:25:30.536917   21389 host.go:66] Checking if "addons-171536" exists ...
	I0111 07:25:30.537298   21389 cli_runner.go:164] Run: docker container inspect addons-171536 --format={{.State.Status}}
	I0111 07:25:30.555785   21389 ssh_runner.go:195] Run: systemctl --version
	I0111 07:25:30.555858   21389 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-171536
	I0111 07:25:30.575599   21389 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22402-4689/.minikube/machines/addons-171536/id_rsa Username:docker}
	I0111 07:25:30.677385   21389 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0111 07:25:30.677462   21389 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0111 07:25:30.707156   21389 cri.go:96] found id: "1c8c4565bbb9023a0c83bd97dbae89b85f11d7b82df8c1b98d59b809c8010d86"
	I0111 07:25:30.707177   21389 cri.go:96] found id: "dec2189f99757ab851395688c7fc4d79ec795d60f0fe442fda316b4651b5f55d"
	I0111 07:25:30.707183   21389 cri.go:96] found id: "020fa17cec32bb1bb91372abed236b039862627b1fd8fd4fde81f48ed468506a"
	I0111 07:25:30.707189   21389 cri.go:96] found id: "4e96c6658214a060ecae16a626e9deca9b428dca524c173c67acc71276d31ed8"
	I0111 07:25:30.707193   21389 cri.go:96] found id: "61261d2742245ff39d3fe14f2e1e7bf0d863678be3fe67b5c66b7add2e5dafe0"
	I0111 07:25:30.707198   21389 cri.go:96] found id: "e82c2d321912890e0c95828d601e534add5486d961339090496e0b7bf8e14e40"
	I0111 07:25:30.707205   21389 cri.go:96] found id: "0073925a1c9e9b81559aaedeae0ea376d65f0b0f86d39d521946a25c0c681ec6"
	I0111 07:25:30.707223   21389 cri.go:96] found id: "e537a2e8b2bf98a5f92bbdc7021887d792dac8e90d307f035a5b086dd94603f4"
	I0111 07:25:30.707232   21389 cri.go:96] found id: "38477b1d03e50987daaf6a39bfb28bf756ed5dc7faec7b69c6731256e2223590"
	I0111 07:25:30.707240   21389 cri.go:96] found id: "4f25c64762e22f492ff4390a2fb5b3c94b2c8b08468692efef8e56d24c717155"
	I0111 07:25:30.707247   21389 cri.go:96] found id: "378d489b0bf341cff405d4cf64f389fc60c711e7d07757a6db4ab7f7192341e6"
	I0111 07:25:30.707252   21389 cri.go:96] found id: "3ff9219f13f87d357a602e307148cc4f3f4ba6e7b581c252ebcf4276c82d4a42"
	I0111 07:25:30.707260   21389 cri.go:96] found id: "c8cfde75062d12d10e4e19ee50819af72fa78518b9ff94a977400346bedc564f"
	I0111 07:25:30.707265   21389 cri.go:96] found id: "412a3cc9d76f6e48dcc863110232a61b800f70b26ea661fa84e8a16c262f410f"
	I0111 07:25:30.707274   21389 cri.go:96] found id: "f453e17ecff346a3d083681ecb4ca0954e14c5abac83bbda4d571c8646eb5500"
	I0111 07:25:30.707284   21389 cri.go:96] found id: "c30a8b92335dd9ed1a8ed66d1318db07964121f3abe89ee920af34af1cf340a5"
	I0111 07:25:30.707289   21389 cri.go:96] found id: "d5450d34ec89c1809b1620960a5552a805b79b716d6ca586749b1118a929e70b"
	I0111 07:25:30.707294   21389 cri.go:96] found id: "bb9ae55f945e1d917ea4576711c3aa2aa501f59b2c2ba14af5414eb1a9b01ecc"
	I0111 07:25:30.707299   21389 cri.go:96] found id: "786a8dbc1c197cacefbc38263fc9b22c0f085386e7ba56db80565adc04738c53"
	I0111 07:25:30.707306   21389 cri.go:96] found id: "47e635841991a532d5fdca1c3fb0fca6b3bafa3cfd322360c14ae5982bad6e65"
	I0111 07:25:30.707311   21389 cri.go:96] found id: "3fee420a7842ae5fb9d53e5ad7e801e60256ab3660cc9f2ebfdd5fb9619d8ead"
	I0111 07:25:30.707318   21389 cri.go:96] found id: "f61be6c7d6abd9153c648bad6294ad71748a4f698c482a43c7b341d16d84f47a"
	I0111 07:25:30.707322   21389 cri.go:96] found id: "bb18076e67f3c6ae629aa4724afeaf0fc653355feab08f2a4b65110d19e4030b"
	I0111 07:25:30.707330   21389 cri.go:96] found id: "3ffa2d6de25c6ece3569c7eff61e3fd3c0d746f7c3c515043a81e0f3e19d49cc"
	I0111 07:25:30.707333   21389 cri.go:96] found id: "883f9697f51fca90435f0ac5afb738b9e4e9e090e04188a7094a910cc6458898"
	I0111 07:25:30.707336   21389 cri.go:96] found id: ""
	I0111 07:25:30.707380   21389 ssh_runner.go:195] Run: sudo runc list -f json
	I0111 07:25:30.721009   21389 out.go:203] 
	W0111 07:25:30.722274   21389 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-11T07:25:30Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-11T07:25:30Z" level=error msg="open /run/runc: no such file or directory"
	
	W0111 07:25:30.722294   21389 out.go:285] * 
	* 
	W0111 07:25:30.723032   21389 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0111 07:25:30.724275   21389 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable ingress-dns addon: args "out/minikube-linux-amd64 -p addons-171536 addons disable ingress-dns --alsologtostderr -v=1": exit status 11
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-171536 addons disable ingress --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-171536 addons disable ingress --alsologtostderr -v=1: exit status 11 (271.967375ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0111 07:25:30.799072   21450 out.go:360] Setting OutFile to fd 1 ...
	I0111 07:25:30.799364   21450 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 07:25:30.799374   21450 out.go:374] Setting ErrFile to fd 2...
	I0111 07:25:30.799379   21450 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 07:25:30.799633   21450 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22402-4689/.minikube/bin
	I0111 07:25:30.799899   21450 mustload.go:66] Loading cluster: addons-171536
	I0111 07:25:30.800208   21450 config.go:182] Loaded profile config "addons-171536": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 07:25:30.800222   21450 addons.go:622] checking whether the cluster is paused
	I0111 07:25:30.800302   21450 config.go:182] Loaded profile config "addons-171536": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 07:25:30.800313   21450 host.go:66] Checking if "addons-171536" exists ...
	I0111 07:25:30.800656   21450 cli_runner.go:164] Run: docker container inspect addons-171536 --format={{.State.Status}}
	I0111 07:25:30.822115   21450 ssh_runner.go:195] Run: systemctl --version
	I0111 07:25:30.822187   21450 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-171536
	I0111 07:25:30.843839   21450 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22402-4689/.minikube/machines/addons-171536/id_rsa Username:docker}
	I0111 07:25:30.946711   21450 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0111 07:25:30.946815   21450 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0111 07:25:30.978945   21450 cri.go:96] found id: "1c8c4565bbb9023a0c83bd97dbae89b85f11d7b82df8c1b98d59b809c8010d86"
	I0111 07:25:30.978964   21450 cri.go:96] found id: "dec2189f99757ab851395688c7fc4d79ec795d60f0fe442fda316b4651b5f55d"
	I0111 07:25:30.978968   21450 cri.go:96] found id: "020fa17cec32bb1bb91372abed236b039862627b1fd8fd4fde81f48ed468506a"
	I0111 07:25:30.978973   21450 cri.go:96] found id: "4e96c6658214a060ecae16a626e9deca9b428dca524c173c67acc71276d31ed8"
	I0111 07:25:30.978977   21450 cri.go:96] found id: "61261d2742245ff39d3fe14f2e1e7bf0d863678be3fe67b5c66b7add2e5dafe0"
	I0111 07:25:30.978980   21450 cri.go:96] found id: "e82c2d321912890e0c95828d601e534add5486d961339090496e0b7bf8e14e40"
	I0111 07:25:30.978984   21450 cri.go:96] found id: "0073925a1c9e9b81559aaedeae0ea376d65f0b0f86d39d521946a25c0c681ec6"
	I0111 07:25:30.978988   21450 cri.go:96] found id: "e537a2e8b2bf98a5f92bbdc7021887d792dac8e90d307f035a5b086dd94603f4"
	I0111 07:25:30.978992   21450 cri.go:96] found id: "38477b1d03e50987daaf6a39bfb28bf756ed5dc7faec7b69c6731256e2223590"
	I0111 07:25:30.978999   21450 cri.go:96] found id: "4f25c64762e22f492ff4390a2fb5b3c94b2c8b08468692efef8e56d24c717155"
	I0111 07:25:30.979003   21450 cri.go:96] found id: "378d489b0bf341cff405d4cf64f389fc60c711e7d07757a6db4ab7f7192341e6"
	I0111 07:25:30.979008   21450 cri.go:96] found id: "3ff9219f13f87d357a602e307148cc4f3f4ba6e7b581c252ebcf4276c82d4a42"
	I0111 07:25:30.979013   21450 cri.go:96] found id: "c8cfde75062d12d10e4e19ee50819af72fa78518b9ff94a977400346bedc564f"
	I0111 07:25:30.979018   21450 cri.go:96] found id: "412a3cc9d76f6e48dcc863110232a61b800f70b26ea661fa84e8a16c262f410f"
	I0111 07:25:30.979027   21450 cri.go:96] found id: "f453e17ecff346a3d083681ecb4ca0954e14c5abac83bbda4d571c8646eb5500"
	I0111 07:25:30.979032   21450 cri.go:96] found id: "c30a8b92335dd9ed1a8ed66d1318db07964121f3abe89ee920af34af1cf340a5"
	I0111 07:25:30.979035   21450 cri.go:96] found id: "d5450d34ec89c1809b1620960a5552a805b79b716d6ca586749b1118a929e70b"
	I0111 07:25:30.979038   21450 cri.go:96] found id: "bb9ae55f945e1d917ea4576711c3aa2aa501f59b2c2ba14af5414eb1a9b01ecc"
	I0111 07:25:30.979041   21450 cri.go:96] found id: "786a8dbc1c197cacefbc38263fc9b22c0f085386e7ba56db80565adc04738c53"
	I0111 07:25:30.979043   21450 cri.go:96] found id: "47e635841991a532d5fdca1c3fb0fca6b3bafa3cfd322360c14ae5982bad6e65"
	I0111 07:25:30.979046   21450 cri.go:96] found id: "3fee420a7842ae5fb9d53e5ad7e801e60256ab3660cc9f2ebfdd5fb9619d8ead"
	I0111 07:25:30.979049   21450 cri.go:96] found id: "f61be6c7d6abd9153c648bad6294ad71748a4f698c482a43c7b341d16d84f47a"
	I0111 07:25:30.979052   21450 cri.go:96] found id: "bb18076e67f3c6ae629aa4724afeaf0fc653355feab08f2a4b65110d19e4030b"
	I0111 07:25:30.979055   21450 cri.go:96] found id: "3ffa2d6de25c6ece3569c7eff61e3fd3c0d746f7c3c515043a81e0f3e19d49cc"
	I0111 07:25:30.979058   21450 cri.go:96] found id: "883f9697f51fca90435f0ac5afb738b9e4e9e090e04188a7094a910cc6458898"
	I0111 07:25:30.979061   21450 cri.go:96] found id: ""
	I0111 07:25:30.979097   21450 ssh_runner.go:195] Run: sudo runc list -f json
	I0111 07:25:30.993614   21450 out.go:203] 
	W0111 07:25:30.994723   21450 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-11T07:25:30Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-11T07:25:30Z" level=error msg="open /run/runc: no such file or directory"
	
	W0111 07:25:30.994740   21450 out.go:285] * 
	* 
	W0111 07:25:30.995478   21450 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0111 07:25:30.996660   21450 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable ingress addon: args "out/minikube-linux-amd64 -p addons-171536 addons disable ingress --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Ingress (8.78s)

TestAddons/parallel/InspektorGadget (5.3s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:353: "gadget-dc8tf" [809e9566-8471-40fd-bc29-6bc9db5b0f3d] Running
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.003482913s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-171536 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-171536 addons disable inspektor-gadget --alsologtostderr -v=1: exit status 11 (294.897334ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0111 07:25:27.976017   20974 out.go:360] Setting OutFile to fd 1 ...
	I0111 07:25:27.976380   20974 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 07:25:27.976395   20974 out.go:374] Setting ErrFile to fd 2...
	I0111 07:25:27.976403   20974 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 07:25:27.976701   20974 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22402-4689/.minikube/bin
	I0111 07:25:27.977112   20974 mustload.go:66] Loading cluster: addons-171536
	I0111 07:25:27.977599   20974 config.go:182] Loaded profile config "addons-171536": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 07:25:27.977625   20974 addons.go:622] checking whether the cluster is paused
	I0111 07:25:27.977778   20974 config.go:182] Loaded profile config "addons-171536": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 07:25:27.977810   20974 host.go:66] Checking if "addons-171536" exists ...
	I0111 07:25:27.978388   20974 cli_runner.go:164] Run: docker container inspect addons-171536 --format={{.State.Status}}
	I0111 07:25:28.001336   20974 ssh_runner.go:195] Run: systemctl --version
	I0111 07:25:28.001404   20974 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-171536
	I0111 07:25:28.023826   20974 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22402-4689/.minikube/machines/addons-171536/id_rsa Username:docker}
	I0111 07:25:28.133862   20974 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0111 07:25:28.133938   20974 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0111 07:25:28.169597   20974 cri.go:96] found id: "dec2189f99757ab851395688c7fc4d79ec795d60f0fe442fda316b4651b5f55d"
	I0111 07:25:28.169632   20974 cri.go:96] found id: "020fa17cec32bb1bb91372abed236b039862627b1fd8fd4fde81f48ed468506a"
	I0111 07:25:28.169637   20974 cri.go:96] found id: "4e96c6658214a060ecae16a626e9deca9b428dca524c173c67acc71276d31ed8"
	I0111 07:25:28.169642   20974 cri.go:96] found id: "61261d2742245ff39d3fe14f2e1e7bf0d863678be3fe67b5c66b7add2e5dafe0"
	I0111 07:25:28.169646   20974 cri.go:96] found id: "e82c2d321912890e0c95828d601e534add5486d961339090496e0b7bf8e14e40"
	I0111 07:25:28.169652   20974 cri.go:96] found id: "0073925a1c9e9b81559aaedeae0ea376d65f0b0f86d39d521946a25c0c681ec6"
	I0111 07:25:28.169656   20974 cri.go:96] found id: "e537a2e8b2bf98a5f92bbdc7021887d792dac8e90d307f035a5b086dd94603f4"
	I0111 07:25:28.169660   20974 cri.go:96] found id: "38477b1d03e50987daaf6a39bfb28bf756ed5dc7faec7b69c6731256e2223590"
	I0111 07:25:28.169664   20974 cri.go:96] found id: "4f25c64762e22f492ff4390a2fb5b3c94b2c8b08468692efef8e56d24c717155"
	I0111 07:25:28.169677   20974 cri.go:96] found id: "378d489b0bf341cff405d4cf64f389fc60c711e7d07757a6db4ab7f7192341e6"
	I0111 07:25:28.169686   20974 cri.go:96] found id: "3ff9219f13f87d357a602e307148cc4f3f4ba6e7b581c252ebcf4276c82d4a42"
	I0111 07:25:28.169690   20974 cri.go:96] found id: "c8cfde75062d12d10e4e19ee50819af72fa78518b9ff94a977400346bedc564f"
	I0111 07:25:28.169696   20974 cri.go:96] found id: "412a3cc9d76f6e48dcc863110232a61b800f70b26ea661fa84e8a16c262f410f"
	I0111 07:25:28.169706   20974 cri.go:96] found id: "f453e17ecff346a3d083681ecb4ca0954e14c5abac83bbda4d571c8646eb5500"
	I0111 07:25:28.169710   20974 cri.go:96] found id: "c30a8b92335dd9ed1a8ed66d1318db07964121f3abe89ee920af34af1cf340a5"
	I0111 07:25:28.169727   20974 cri.go:96] found id: "d5450d34ec89c1809b1620960a5552a805b79b716d6ca586749b1118a929e70b"
	I0111 07:25:28.169733   20974 cri.go:96] found id: "bb9ae55f945e1d917ea4576711c3aa2aa501f59b2c2ba14af5414eb1a9b01ecc"
	I0111 07:25:28.169739   20974 cri.go:96] found id: "786a8dbc1c197cacefbc38263fc9b22c0f085386e7ba56db80565adc04738c53"
	I0111 07:25:28.169743   20974 cri.go:96] found id: "47e635841991a532d5fdca1c3fb0fca6b3bafa3cfd322360c14ae5982bad6e65"
	I0111 07:25:28.169748   20974 cri.go:96] found id: "3fee420a7842ae5fb9d53e5ad7e801e60256ab3660cc9f2ebfdd5fb9619d8ead"
	I0111 07:25:28.169752   20974 cri.go:96] found id: "f61be6c7d6abd9153c648bad6294ad71748a4f698c482a43c7b341d16d84f47a"
	I0111 07:25:28.169757   20974 cri.go:96] found id: "bb18076e67f3c6ae629aa4724afeaf0fc653355feab08f2a4b65110d19e4030b"
	I0111 07:25:28.169762   20974 cri.go:96] found id: "3ffa2d6de25c6ece3569c7eff61e3fd3c0d746f7c3c515043a81e0f3e19d49cc"
	I0111 07:25:28.169766   20974 cri.go:96] found id: "883f9697f51fca90435f0ac5afb738b9e4e9e090e04188a7094a910cc6458898"
	I0111 07:25:28.169770   20974 cri.go:96] found id: ""
	I0111 07:25:28.169864   20974 ssh_runner.go:195] Run: sudo runc list -f json
	I0111 07:25:28.188928   20974 out.go:203] 
	W0111 07:25:28.190088   20974 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-11T07:25:28Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-11T07:25:28Z" level=error msg="open /run/runc: no such file or directory"
	
	W0111 07:25:28.190108   20974 out.go:285] * 
	* 
	W0111 07:25:28.191105   20974 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0111 07:25:28.192391   20974 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable inspektor-gadget addon: args "out/minikube-linux-amd64 -p addons-171536 addons disable inspektor-gadget --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/InspektorGadget (5.30s)

TestAddons/parallel/MetricsServer (5.31s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:457: metrics-server stabilized in 2.85384ms
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:353: "metrics-server-5778bb4788-v4bk9" [4dbdf9d6-d8d5-4310-88f7-d1e64748af7f] Running
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003354869s
addons_test.go:465: (dbg) Run:  kubectl --context addons-171536 top pods -n kube-system
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-171536 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-171536 addons disable metrics-server --alsologtostderr -v=1: exit status 11 (241.147827ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0111 07:25:19.508412   19528 out.go:360] Setting OutFile to fd 1 ...
	I0111 07:25:19.508697   19528 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 07:25:19.508706   19528 out.go:374] Setting ErrFile to fd 2...
	I0111 07:25:19.508711   19528 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 07:25:19.508924   19528 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22402-4689/.minikube/bin
	I0111 07:25:19.509179   19528 mustload.go:66] Loading cluster: addons-171536
	I0111 07:25:19.509486   19528 config.go:182] Loaded profile config "addons-171536": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 07:25:19.509499   19528 addons.go:622] checking whether the cluster is paused
	I0111 07:25:19.509577   19528 config.go:182] Loaded profile config "addons-171536": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 07:25:19.509586   19528 host.go:66] Checking if "addons-171536" exists ...
	I0111 07:25:19.509934   19528 cli_runner.go:164] Run: docker container inspect addons-171536 --format={{.State.Status}}
	I0111 07:25:19.528215   19528 ssh_runner.go:195] Run: systemctl --version
	I0111 07:25:19.528267   19528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-171536
	I0111 07:25:19.545264   19528 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22402-4689/.minikube/machines/addons-171536/id_rsa Username:docker}
	I0111 07:25:19.646422   19528 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0111 07:25:19.646507   19528 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0111 07:25:19.675133   19528 cri.go:96] found id: "dec2189f99757ab851395688c7fc4d79ec795d60f0fe442fda316b4651b5f55d"
	I0111 07:25:19.675155   19528 cri.go:96] found id: "020fa17cec32bb1bb91372abed236b039862627b1fd8fd4fde81f48ed468506a"
	I0111 07:25:19.675159   19528 cri.go:96] found id: "4e96c6658214a060ecae16a626e9deca9b428dca524c173c67acc71276d31ed8"
	I0111 07:25:19.675162   19528 cri.go:96] found id: "61261d2742245ff39d3fe14f2e1e7bf0d863678be3fe67b5c66b7add2e5dafe0"
	I0111 07:25:19.675165   19528 cri.go:96] found id: "e82c2d321912890e0c95828d601e534add5486d961339090496e0b7bf8e14e40"
	I0111 07:25:19.675169   19528 cri.go:96] found id: "0073925a1c9e9b81559aaedeae0ea376d65f0b0f86d39d521946a25c0c681ec6"
	I0111 07:25:19.675173   19528 cri.go:96] found id: "e537a2e8b2bf98a5f92bbdc7021887d792dac8e90d307f035a5b086dd94603f4"
	I0111 07:25:19.675177   19528 cri.go:96] found id: "38477b1d03e50987daaf6a39bfb28bf756ed5dc7faec7b69c6731256e2223590"
	I0111 07:25:19.675180   19528 cri.go:96] found id: "4f25c64762e22f492ff4390a2fb5b3c94b2c8b08468692efef8e56d24c717155"
	I0111 07:25:19.675187   19528 cri.go:96] found id: "378d489b0bf341cff405d4cf64f389fc60c711e7d07757a6db4ab7f7192341e6"
	I0111 07:25:19.675191   19528 cri.go:96] found id: "3ff9219f13f87d357a602e307148cc4f3f4ba6e7b581c252ebcf4276c82d4a42"
	I0111 07:25:19.675207   19528 cri.go:96] found id: "c8cfde75062d12d10e4e19ee50819af72fa78518b9ff94a977400346bedc564f"
	I0111 07:25:19.675216   19528 cri.go:96] found id: "412a3cc9d76f6e48dcc863110232a61b800f70b26ea661fa84e8a16c262f410f"
	I0111 07:25:19.675220   19528 cri.go:96] found id: "f453e17ecff346a3d083681ecb4ca0954e14c5abac83bbda4d571c8646eb5500"
	I0111 07:25:19.675225   19528 cri.go:96] found id: "c30a8b92335dd9ed1a8ed66d1318db07964121f3abe89ee920af34af1cf340a5"
	I0111 07:25:19.675231   19528 cri.go:96] found id: "d5450d34ec89c1809b1620960a5552a805b79b716d6ca586749b1118a929e70b"
	I0111 07:25:19.675235   19528 cri.go:96] found id: "bb9ae55f945e1d917ea4576711c3aa2aa501f59b2c2ba14af5414eb1a9b01ecc"
	I0111 07:25:19.675238   19528 cri.go:96] found id: "786a8dbc1c197cacefbc38263fc9b22c0f085386e7ba56db80565adc04738c53"
	I0111 07:25:19.675240   19528 cri.go:96] found id: "47e635841991a532d5fdca1c3fb0fca6b3bafa3cfd322360c14ae5982bad6e65"
	I0111 07:25:19.675243   19528 cri.go:96] found id: "3fee420a7842ae5fb9d53e5ad7e801e60256ab3660cc9f2ebfdd5fb9619d8ead"
	I0111 07:25:19.675246   19528 cri.go:96] found id: "f61be6c7d6abd9153c648bad6294ad71748a4f698c482a43c7b341d16d84f47a"
	I0111 07:25:19.675249   19528 cri.go:96] found id: "bb18076e67f3c6ae629aa4724afeaf0fc653355feab08f2a4b65110d19e4030b"
	I0111 07:25:19.675252   19528 cri.go:96] found id: "3ffa2d6de25c6ece3569c7eff61e3fd3c0d746f7c3c515043a81e0f3e19d49cc"
	I0111 07:25:19.675255   19528 cri.go:96] found id: "883f9697f51fca90435f0ac5afb738b9e4e9e090e04188a7094a910cc6458898"
	I0111 07:25:19.675258   19528 cri.go:96] found id: ""
	I0111 07:25:19.675303   19528 ssh_runner.go:195] Run: sudo runc list -f json
	I0111 07:25:19.688692   19528 out.go:203] 
	W0111 07:25:19.689950   19528 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-11T07:25:19Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-11T07:25:19Z" level=error msg="open /run/runc: no such file or directory"
	
	W0111 07:25:19.689966   19528 out.go:285] * 
	* 
	W0111 07:25:19.690590   19528 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0111 07:25:19.691749   19528 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable metrics-server addon: args "out/minikube-linux-amd64 -p addons-171536 addons disable metrics-server --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/MetricsServer (5.31s)

                                                
                                    
TestAddons/parallel/CSI (39.07s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I0111 07:25:16.242364    8238 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0111 07:25:16.245623    8238 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0111 07:25:16.245646    8238 kapi.go:107] duration metric: took 3.295619ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:551: csi-hostpath-driver pods stabilized in 3.305907ms
addons_test.go:554: (dbg) Run:  kubectl --context addons-171536 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:559: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-171536 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-171536 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-171536 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-171536 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-171536 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-171536 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-171536 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-171536 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-171536 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-171536 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-171536 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-171536 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-171536 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-171536 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-171536 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:564: (dbg) Run:  kubectl --context addons-171536 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:353: "task-pv-pod" [7cc7ed70-1d5d-4238-bcb9-664861a1fe54] Pending
helpers_test.go:353: "task-pv-pod" [7cc7ed70-1d5d-4238-bcb9-664861a1fe54] Running
addons_test.go:569: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 7.003840988s
addons_test.go:574: (dbg) Run:  kubectl --context addons-171536 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:428: (dbg) Run:  kubectl --context addons-171536 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:428: (dbg) Run:  kubectl --context addons-171536 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:584: (dbg) Run:  kubectl --context addons-171536 delete pod task-pv-pod
addons_test.go:590: (dbg) Run:  kubectl --context addons-171536 delete pvc hpvc
addons_test.go:596: (dbg) Run:  kubectl --context addons-171536 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:601: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-171536 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-171536 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-171536 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-171536 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-171536 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-171536 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-171536 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-171536 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-171536 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:606: (dbg) Run:  kubectl --context addons-171536 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:353: "task-pv-pod-restore" [bfbc6c15-cffe-4deb-99c6-08a285f356e9] Pending
helpers_test.go:353: "task-pv-pod-restore" [bfbc6c15-cffe-4deb-99c6-08a285f356e9] Running
addons_test.go:611: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 6.003762325s
addons_test.go:616: (dbg) Run:  kubectl --context addons-171536 delete pod task-pv-pod-restore
addons_test.go:620: (dbg) Run:  kubectl --context addons-171536 delete pvc hpvc-restore
addons_test.go:624: (dbg) Run:  kubectl --context addons-171536 delete volumesnapshot new-snapshot-demo
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-171536 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-171536 addons disable volumesnapshots --alsologtostderr -v=1: exit status 11 (245.021714ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0111 07:25:54.878784   22434 out.go:360] Setting OutFile to fd 1 ...
	I0111 07:25:54.878960   22434 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 07:25:54.878971   22434 out.go:374] Setting ErrFile to fd 2...
	I0111 07:25:54.878975   22434 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 07:25:54.879151   22434 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22402-4689/.minikube/bin
	I0111 07:25:54.879419   22434 mustload.go:66] Loading cluster: addons-171536
	I0111 07:25:54.879716   22434 config.go:182] Loaded profile config "addons-171536": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 07:25:54.879728   22434 addons.go:622] checking whether the cluster is paused
	I0111 07:25:54.879817   22434 config.go:182] Loaded profile config "addons-171536": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 07:25:54.879828   22434 host.go:66] Checking if "addons-171536" exists ...
	I0111 07:25:54.880202   22434 cli_runner.go:164] Run: docker container inspect addons-171536 --format={{.State.Status}}
	I0111 07:25:54.897852   22434 ssh_runner.go:195] Run: systemctl --version
	I0111 07:25:54.897910   22434 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-171536
	I0111 07:25:54.916699   22434 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22402-4689/.minikube/machines/addons-171536/id_rsa Username:docker}
	I0111 07:25:55.017119   22434 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0111 07:25:55.017214   22434 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0111 07:25:55.049709   22434 cri.go:96] found id: "1c8c4565bbb9023a0c83bd97dbae89b85f11d7b82df8c1b98d59b809c8010d86"
	I0111 07:25:55.049737   22434 cri.go:96] found id: "dec2189f99757ab851395688c7fc4d79ec795d60f0fe442fda316b4651b5f55d"
	I0111 07:25:55.049742   22434 cri.go:96] found id: "020fa17cec32bb1bb91372abed236b039862627b1fd8fd4fde81f48ed468506a"
	I0111 07:25:55.049745   22434 cri.go:96] found id: "4e96c6658214a060ecae16a626e9deca9b428dca524c173c67acc71276d31ed8"
	I0111 07:25:55.049748   22434 cri.go:96] found id: "61261d2742245ff39d3fe14f2e1e7bf0d863678be3fe67b5c66b7add2e5dafe0"
	I0111 07:25:55.049751   22434 cri.go:96] found id: "e82c2d321912890e0c95828d601e534add5486d961339090496e0b7bf8e14e40"
	I0111 07:25:55.049754   22434 cri.go:96] found id: "0073925a1c9e9b81559aaedeae0ea376d65f0b0f86d39d521946a25c0c681ec6"
	I0111 07:25:55.049756   22434 cri.go:96] found id: "e537a2e8b2bf98a5f92bbdc7021887d792dac8e90d307f035a5b086dd94603f4"
	I0111 07:25:55.049759   22434 cri.go:96] found id: "38477b1d03e50987daaf6a39bfb28bf756ed5dc7faec7b69c6731256e2223590"
	I0111 07:25:55.049764   22434 cri.go:96] found id: "4f25c64762e22f492ff4390a2fb5b3c94b2c8b08468692efef8e56d24c717155"
	I0111 07:25:55.049772   22434 cri.go:96] found id: "378d489b0bf341cff405d4cf64f389fc60c711e7d07757a6db4ab7f7192341e6"
	I0111 07:25:55.049776   22434 cri.go:96] found id: "3ff9219f13f87d357a602e307148cc4f3f4ba6e7b581c252ebcf4276c82d4a42"
	I0111 07:25:55.049781   22434 cri.go:96] found id: "c8cfde75062d12d10e4e19ee50819af72fa78518b9ff94a977400346bedc564f"
	I0111 07:25:55.049785   22434 cri.go:96] found id: "412a3cc9d76f6e48dcc863110232a61b800f70b26ea661fa84e8a16c262f410f"
	I0111 07:25:55.049790   22434 cri.go:96] found id: "f453e17ecff346a3d083681ecb4ca0954e14c5abac83bbda4d571c8646eb5500"
	I0111 07:25:55.049819   22434 cri.go:96] found id: "c30a8b92335dd9ed1a8ed66d1318db07964121f3abe89ee920af34af1cf340a5"
	I0111 07:25:55.049825   22434 cri.go:96] found id: "d5450d34ec89c1809b1620960a5552a805b79b716d6ca586749b1118a929e70b"
	I0111 07:25:55.049832   22434 cri.go:96] found id: "bb9ae55f945e1d917ea4576711c3aa2aa501f59b2c2ba14af5414eb1a9b01ecc"
	I0111 07:25:55.049839   22434 cri.go:96] found id: "786a8dbc1c197cacefbc38263fc9b22c0f085386e7ba56db80565adc04738c53"
	I0111 07:25:55.049842   22434 cri.go:96] found id: "47e635841991a532d5fdca1c3fb0fca6b3bafa3cfd322360c14ae5982bad6e65"
	I0111 07:25:55.049847   22434 cri.go:96] found id: "3fee420a7842ae5fb9d53e5ad7e801e60256ab3660cc9f2ebfdd5fb9619d8ead"
	I0111 07:25:55.049852   22434 cri.go:96] found id: "f61be6c7d6abd9153c648bad6294ad71748a4f698c482a43c7b341d16d84f47a"
	I0111 07:25:55.049855   22434 cri.go:96] found id: "bb18076e67f3c6ae629aa4724afeaf0fc653355feab08f2a4b65110d19e4030b"
	I0111 07:25:55.049858   22434 cri.go:96] found id: "3ffa2d6de25c6ece3569c7eff61e3fd3c0d746f7c3c515043a81e0f3e19d49cc"
	I0111 07:25:55.049861   22434 cri.go:96] found id: "883f9697f51fca90435f0ac5afb738b9e4e9e090e04188a7094a910cc6458898"
	I0111 07:25:55.049863   22434 cri.go:96] found id: ""
	I0111 07:25:55.049914   22434 ssh_runner.go:195] Run: sudo runc list -f json
	I0111 07:25:55.063941   22434 out.go:203] 
	W0111 07:25:55.065204   22434 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-11T07:25:55Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-11T07:25:55Z" level=error msg="open /run/runc: no such file or directory"
	
	W0111 07:25:55.065235   22434 out.go:285] * 
	* 
	W0111 07:25:55.066166   22434 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0111 07:25:55.067404   22434 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable volumesnapshots addon: args "out/minikube-linux-amd64 -p addons-171536 addons disable volumesnapshots --alsologtostderr -v=1": exit status 11
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-171536 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-171536 addons disable csi-hostpath-driver --alsologtostderr -v=1: exit status 11 (238.271193ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0111 07:25:55.123570   22496 out.go:360] Setting OutFile to fd 1 ...
	I0111 07:25:55.123888   22496 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 07:25:55.123905   22496 out.go:374] Setting ErrFile to fd 2...
	I0111 07:25:55.123912   22496 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 07:25:55.124101   22496 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22402-4689/.minikube/bin
	I0111 07:25:55.124399   22496 mustload.go:66] Loading cluster: addons-171536
	I0111 07:25:55.124747   22496 config.go:182] Loaded profile config "addons-171536": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 07:25:55.124766   22496 addons.go:622] checking whether the cluster is paused
	I0111 07:25:55.124884   22496 config.go:182] Loaded profile config "addons-171536": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 07:25:55.124907   22496 host.go:66] Checking if "addons-171536" exists ...
	I0111 07:25:55.125287   22496 cli_runner.go:164] Run: docker container inspect addons-171536 --format={{.State.Status}}
	I0111 07:25:55.142931   22496 ssh_runner.go:195] Run: systemctl --version
	I0111 07:25:55.143003   22496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-171536
	I0111 07:25:55.161355   22496 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22402-4689/.minikube/machines/addons-171536/id_rsa Username:docker}
	I0111 07:25:55.261151   22496 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0111 07:25:55.261257   22496 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0111 07:25:55.289651   22496 cri.go:96] found id: "1c8c4565bbb9023a0c83bd97dbae89b85f11d7b82df8c1b98d59b809c8010d86"
	I0111 07:25:55.289670   22496 cri.go:96] found id: "dec2189f99757ab851395688c7fc4d79ec795d60f0fe442fda316b4651b5f55d"
	I0111 07:25:55.289674   22496 cri.go:96] found id: "020fa17cec32bb1bb91372abed236b039862627b1fd8fd4fde81f48ed468506a"
	I0111 07:25:55.289678   22496 cri.go:96] found id: "4e96c6658214a060ecae16a626e9deca9b428dca524c173c67acc71276d31ed8"
	I0111 07:25:55.289680   22496 cri.go:96] found id: "61261d2742245ff39d3fe14f2e1e7bf0d863678be3fe67b5c66b7add2e5dafe0"
	I0111 07:25:55.289683   22496 cri.go:96] found id: "e82c2d321912890e0c95828d601e534add5486d961339090496e0b7bf8e14e40"
	I0111 07:25:55.289686   22496 cri.go:96] found id: "0073925a1c9e9b81559aaedeae0ea376d65f0b0f86d39d521946a25c0c681ec6"
	I0111 07:25:55.289689   22496 cri.go:96] found id: "e537a2e8b2bf98a5f92bbdc7021887d792dac8e90d307f035a5b086dd94603f4"
	I0111 07:25:55.289692   22496 cri.go:96] found id: "38477b1d03e50987daaf6a39bfb28bf756ed5dc7faec7b69c6731256e2223590"
	I0111 07:25:55.289698   22496 cri.go:96] found id: "4f25c64762e22f492ff4390a2fb5b3c94b2c8b08468692efef8e56d24c717155"
	I0111 07:25:55.289701   22496 cri.go:96] found id: "378d489b0bf341cff405d4cf64f389fc60c711e7d07757a6db4ab7f7192341e6"
	I0111 07:25:55.289704   22496 cri.go:96] found id: "3ff9219f13f87d357a602e307148cc4f3f4ba6e7b581c252ebcf4276c82d4a42"
	I0111 07:25:55.289707   22496 cri.go:96] found id: "c8cfde75062d12d10e4e19ee50819af72fa78518b9ff94a977400346bedc564f"
	I0111 07:25:55.289718   22496 cri.go:96] found id: "412a3cc9d76f6e48dcc863110232a61b800f70b26ea661fa84e8a16c262f410f"
	I0111 07:25:55.289725   22496 cri.go:96] found id: "f453e17ecff346a3d083681ecb4ca0954e14c5abac83bbda4d571c8646eb5500"
	I0111 07:25:55.289729   22496 cri.go:96] found id: "c30a8b92335dd9ed1a8ed66d1318db07964121f3abe89ee920af34af1cf340a5"
	I0111 07:25:55.289732   22496 cri.go:96] found id: "d5450d34ec89c1809b1620960a5552a805b79b716d6ca586749b1118a929e70b"
	I0111 07:25:55.289735   22496 cri.go:96] found id: "bb9ae55f945e1d917ea4576711c3aa2aa501f59b2c2ba14af5414eb1a9b01ecc"
	I0111 07:25:55.289738   22496 cri.go:96] found id: "786a8dbc1c197cacefbc38263fc9b22c0f085386e7ba56db80565adc04738c53"
	I0111 07:25:55.289740   22496 cri.go:96] found id: "47e635841991a532d5fdca1c3fb0fca6b3bafa3cfd322360c14ae5982bad6e65"
	I0111 07:25:55.289743   22496 cri.go:96] found id: "3fee420a7842ae5fb9d53e5ad7e801e60256ab3660cc9f2ebfdd5fb9619d8ead"
	I0111 07:25:55.289746   22496 cri.go:96] found id: "f61be6c7d6abd9153c648bad6294ad71748a4f698c482a43c7b341d16d84f47a"
	I0111 07:25:55.289749   22496 cri.go:96] found id: "bb18076e67f3c6ae629aa4724afeaf0fc653355feab08f2a4b65110d19e4030b"
	I0111 07:25:55.289756   22496 cri.go:96] found id: "3ffa2d6de25c6ece3569c7eff61e3fd3c0d746f7c3c515043a81e0f3e19d49cc"
	I0111 07:25:55.289759   22496 cri.go:96] found id: "883f9697f51fca90435f0ac5afb738b9e4e9e090e04188a7094a910cc6458898"
	I0111 07:25:55.289762   22496 cri.go:96] found id: ""
	I0111 07:25:55.289813   22496 ssh_runner.go:195] Run: sudo runc list -f json
	I0111 07:25:55.303359   22496 out.go:203] 
	W0111 07:25:55.304630   22496 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-11T07:25:55Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-11T07:25:55Z" level=error msg="open /run/runc: no such file or directory"
	
	W0111 07:25:55.304648   22496 out.go:285] * 
	* 
	W0111 07:25:55.305339   22496 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0111 07:25:55.306459   22496 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable csi-hostpath-driver addon: args "out/minikube-linux-amd64 -p addons-171536 addons disable csi-hostpath-driver --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CSI (39.07s)

                                                
                                    
TestAddons/parallel/Headlamp (2.6s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:810: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-171536 --alsologtostderr -v=1
addons_test.go:810: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable headlamp -p addons-171536 --alsologtostderr -v=1: exit status 11 (268.385394ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0111 07:25:09.191564   17950 out.go:360] Setting OutFile to fd 1 ...
	I0111 07:25:09.191940   17950 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 07:25:09.191957   17950 out.go:374] Setting ErrFile to fd 2...
	I0111 07:25:09.191965   17950 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 07:25:09.192280   17950 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22402-4689/.minikube/bin
	I0111 07:25:09.192707   17950 mustload.go:66] Loading cluster: addons-171536
	I0111 07:25:09.193227   17950 config.go:182] Loaded profile config "addons-171536": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 07:25:09.193255   17950 addons.go:622] checking whether the cluster is paused
	I0111 07:25:09.193404   17950 config.go:182] Loaded profile config "addons-171536": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 07:25:09.193428   17950 host.go:66] Checking if "addons-171536" exists ...
	I0111 07:25:09.194068   17950 cli_runner.go:164] Run: docker container inspect addons-171536 --format={{.State.Status}}
	I0111 07:25:09.212392   17950 ssh_runner.go:195] Run: systemctl --version
	I0111 07:25:09.212442   17950 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-171536
	I0111 07:25:09.231587   17950 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22402-4689/.minikube/machines/addons-171536/id_rsa Username:docker}
	I0111 07:25:09.334228   17950 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0111 07:25:09.334320   17950 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0111 07:25:09.368587   17950 cri.go:96] found id: "dec2189f99757ab851395688c7fc4d79ec795d60f0fe442fda316b4651b5f55d"
	I0111 07:25:09.368608   17950 cri.go:96] found id: "020fa17cec32bb1bb91372abed236b039862627b1fd8fd4fde81f48ed468506a"
	I0111 07:25:09.368612   17950 cri.go:96] found id: "4e96c6658214a060ecae16a626e9deca9b428dca524c173c67acc71276d31ed8"
	I0111 07:25:09.368616   17950 cri.go:96] found id: "61261d2742245ff39d3fe14f2e1e7bf0d863678be3fe67b5c66b7add2e5dafe0"
	I0111 07:25:09.368619   17950 cri.go:96] found id: "e82c2d321912890e0c95828d601e534add5486d961339090496e0b7bf8e14e40"
	I0111 07:25:09.368622   17950 cri.go:96] found id: "0073925a1c9e9b81559aaedeae0ea376d65f0b0f86d39d521946a25c0c681ec6"
	I0111 07:25:09.368625   17950 cri.go:96] found id: "e537a2e8b2bf98a5f92bbdc7021887d792dac8e90d307f035a5b086dd94603f4"
	I0111 07:25:09.368628   17950 cri.go:96] found id: "38477b1d03e50987daaf6a39bfb28bf756ed5dc7faec7b69c6731256e2223590"
	I0111 07:25:09.368631   17950 cri.go:96] found id: "4f25c64762e22f492ff4390a2fb5b3c94b2c8b08468692efef8e56d24c717155"
	I0111 07:25:09.368636   17950 cri.go:96] found id: "378d489b0bf341cff405d4cf64f389fc60c711e7d07757a6db4ab7f7192341e6"
	I0111 07:25:09.368640   17950 cri.go:96] found id: "3ff9219f13f87d357a602e307148cc4f3f4ba6e7b581c252ebcf4276c82d4a42"
	I0111 07:25:09.368644   17950 cri.go:96] found id: "c8cfde75062d12d10e4e19ee50819af72fa78518b9ff94a977400346bedc564f"
	I0111 07:25:09.368648   17950 cri.go:96] found id: "412a3cc9d76f6e48dcc863110232a61b800f70b26ea661fa84e8a16c262f410f"
	I0111 07:25:09.368653   17950 cri.go:96] found id: "f453e17ecff346a3d083681ecb4ca0954e14c5abac83bbda4d571c8646eb5500"
	I0111 07:25:09.368657   17950 cri.go:96] found id: "c30a8b92335dd9ed1a8ed66d1318db07964121f3abe89ee920af34af1cf340a5"
	I0111 07:25:09.368670   17950 cri.go:96] found id: "d5450d34ec89c1809b1620960a5552a805b79b716d6ca586749b1118a929e70b"
	I0111 07:25:09.368675   17950 cri.go:96] found id: "bb9ae55f945e1d917ea4576711c3aa2aa501f59b2c2ba14af5414eb1a9b01ecc"
	I0111 07:25:09.368682   17950 cri.go:96] found id: "786a8dbc1c197cacefbc38263fc9b22c0f085386e7ba56db80565adc04738c53"
	I0111 07:25:09.368686   17950 cri.go:96] found id: "47e635841991a532d5fdca1c3fb0fca6b3bafa3cfd322360c14ae5982bad6e65"
	I0111 07:25:09.368691   17950 cri.go:96] found id: "3fee420a7842ae5fb9d53e5ad7e801e60256ab3660cc9f2ebfdd5fb9619d8ead"
	I0111 07:25:09.368696   17950 cri.go:96] found id: "f61be6c7d6abd9153c648bad6294ad71748a4f698c482a43c7b341d16d84f47a"
	I0111 07:25:09.368701   17950 cri.go:96] found id: "bb18076e67f3c6ae629aa4724afeaf0fc653355feab08f2a4b65110d19e4030b"
	I0111 07:25:09.368710   17950 cri.go:96] found id: "3ffa2d6de25c6ece3569c7eff61e3fd3c0d746f7c3c515043a81e0f3e19d49cc"
	I0111 07:25:09.368713   17950 cri.go:96] found id: "883f9697f51fca90435f0ac5afb738b9e4e9e090e04188a7094a910cc6458898"
	I0111 07:25:09.368716   17950 cri.go:96] found id: ""
	I0111 07:25:09.368762   17950 ssh_runner.go:195] Run: sudo runc list -f json
	I0111 07:25:09.387150   17950 out.go:203] 
	W0111 07:25:09.389284   17950 out.go:285] X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-11T07:25:09Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-11T07:25:09Z" level=error msg="open /run/runc: no such file or directory"
	
	W0111 07:25:09.389306   17950 out.go:285] * 
	* 
	W0111 07:25:09.390289   17950 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0111 07:25:09.392192   17950 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:812: failed to enable headlamp addon: args: "out/minikube-linux-amd64 addons enable headlamp -p addons-171536 --alsologtostderr -v=1": exit status 11
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestAddons/parallel/Headlamp]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestAddons/parallel/Headlamp]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect addons-171536
helpers_test.go:244: (dbg) docker inspect addons-171536:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "137ee80391416cb2f3f09b203419e88270343f07263544982dc24c35eba79f5b",
	        "Created": "2026-01-11T07:23:54.346359708Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 10248,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2026-01-11T07:23:54.393497006Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:9d81ad6f098dcf669510a21e6f98da4d3301079c139c22f3d7bcee050830f45e",
	        "ResolvConfPath": "/var/lib/docker/containers/137ee80391416cb2f3f09b203419e88270343f07263544982dc24c35eba79f5b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/137ee80391416cb2f3f09b203419e88270343f07263544982dc24c35eba79f5b/hostname",
	        "HostsPath": "/var/lib/docker/containers/137ee80391416cb2f3f09b203419e88270343f07263544982dc24c35eba79f5b/hosts",
	        "LogPath": "/var/lib/docker/containers/137ee80391416cb2f3f09b203419e88270343f07263544982dc24c35eba79f5b/137ee80391416cb2f3f09b203419e88270343f07263544982dc24c35eba79f5b-json.log",
	        "Name": "/addons-171536",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-171536:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-171536",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "137ee80391416cb2f3f09b203419e88270343f07263544982dc24c35eba79f5b",
	                "LowerDir": "/var/lib/docker/overlay2/6d0ce7c1675e934fcd42e8b37b9edc69b13e595380b3130454fbcd95fe14fe4f-init/diff:/var/lib/docker/overlay2/fbed3e2386e680328722e9ed68250a4aae1dd3b50a64848ee2ebb685fb1cc002/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6d0ce7c1675e934fcd42e8b37b9edc69b13e595380b3130454fbcd95fe14fe4f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6d0ce7c1675e934fcd42e8b37b9edc69b13e595380b3130454fbcd95fe14fe4f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6d0ce7c1675e934fcd42e8b37b9edc69b13e595380b3130454fbcd95fe14fe4f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-171536",
	                "Source": "/var/lib/docker/volumes/addons-171536/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-171536",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-171536",
	                "name.minikube.sigs.k8s.io": "addons-171536",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "55ea208e52aba9cb01908a67cbb9de09c7a438abd5a6130de117edb50d560484",
	            "SandboxKey": "/var/run/docker/netns/55ea208e52ab",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "Networks": {
	                "addons-171536": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "9f6f4133615a2fadde066d4b27815913182dfee0890019eed40641306148dea7",
	                    "EndpointID": "331fe0d709c10933517b78fe04964e505c0ee06978002b13f7bc4cd6eab615d8",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "ba:1e:48:5d:f6:b0",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-171536",
	                        "137ee8039141"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-171536 -n addons-171536
helpers_test.go:253: <<< TestAddons/parallel/Headlamp FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestAddons/parallel/Headlamp]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p addons-171536 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p addons-171536 logs -n 25: (1.14203976s)
helpers_test.go:261: TestAddons/parallel/Headlamp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-928847 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-928847   │ jenkins │ v1.37.0 │ 11 Jan 26 07:23 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 11 Jan 26 07:23 UTC │ 11 Jan 26 07:23 UTC │
	│ delete  │ -p download-only-928847                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-928847   │ jenkins │ v1.37.0 │ 11 Jan 26 07:23 UTC │ 11 Jan 26 07:23 UTC │
	│ start   │ -o=json --download-only -p download-only-913516 --force --alsologtostderr --kubernetes-version=v1.35.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-913516   │ jenkins │ v1.37.0 │ 11 Jan 26 07:23 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 11 Jan 26 07:23 UTC │ 11 Jan 26 07:23 UTC │
	│ delete  │ -p download-only-913516                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-913516   │ jenkins │ v1.37.0 │ 11 Jan 26 07:23 UTC │ 11 Jan 26 07:23 UTC │
	│ delete  │ -p download-only-928847                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-928847   │ jenkins │ v1.37.0 │ 11 Jan 26 07:23 UTC │ 11 Jan 26 07:23 UTC │
	│ delete  │ -p download-only-913516                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-913516   │ jenkins │ v1.37.0 │ 11 Jan 26 07:23 UTC │ 11 Jan 26 07:23 UTC │
	│ start   │ --download-only -p download-docker-564084 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-564084 │ jenkins │ v1.37.0 │ 11 Jan 26 07:23 UTC │                     │
	│ delete  │ -p download-docker-564084                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-564084 │ jenkins │ v1.37.0 │ 11 Jan 26 07:23 UTC │ 11 Jan 26 07:23 UTC │
	│ start   │ --download-only -p binary-mirror-682885 --alsologtostderr --binary-mirror http://127.0.0.1:38307 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-682885   │ jenkins │ v1.37.0 │ 11 Jan 26 07:23 UTC │                     │
	│ delete  │ -p binary-mirror-682885                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-682885   │ jenkins │ v1.37.0 │ 11 Jan 26 07:23 UTC │ 11 Jan 26 07:23 UTC │
	│ addons  │ enable dashboard -p addons-171536                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-171536          │ jenkins │ v1.37.0 │ 11 Jan 26 07:23 UTC │                     │
	│ addons  │ disable dashboard -p addons-171536                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-171536          │ jenkins │ v1.37.0 │ 11 Jan 26 07:23 UTC │                     │
	│ start   │ -p addons-171536 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-171536          │ jenkins │ v1.37.0 │ 11 Jan 26 07:23 UTC │ 11 Jan 26 07:25 UTC │
	│ addons  │ addons-171536 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-171536          │ jenkins │ v1.37.0 │ 11 Jan 26 07:25 UTC │                     │
	│ addons  │ addons-171536 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-171536          │ jenkins │ v1.37.0 │ 11 Jan 26 07:25 UTC │                     │
	│ addons  │ enable headlamp -p addons-171536 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-171536          │ jenkins │ v1.37.0 │ 11 Jan 26 07:25 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2026/01/11 07:23:30
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0111 07:23:30.654981    9587 out.go:360] Setting OutFile to fd 1 ...
	I0111 07:23:30.655070    9587 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 07:23:30.655077    9587 out.go:374] Setting ErrFile to fd 2...
	I0111 07:23:30.655083    9587 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 07:23:30.655280    9587 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22402-4689/.minikube/bin
	I0111 07:23:30.655740    9587 out.go:368] Setting JSON to false
	I0111 07:23:30.656497    9587 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":359,"bootTime":1768115852,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0111 07:23:30.656545    9587 start.go:143] virtualization: kvm guest
	I0111 07:23:30.658460    9587 out.go:179] * [addons-171536] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0111 07:23:30.659594    9587 notify.go:221] Checking for updates...
	I0111 07:23:30.659601    9587 out.go:179]   - MINIKUBE_LOCATION=22402
	I0111 07:23:30.660667    9587 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0111 07:23:30.661887    9587 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22402-4689/kubeconfig
	I0111 07:23:30.663112    9587 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22402-4689/.minikube
	I0111 07:23:30.664101    9587 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0111 07:23:30.665255    9587 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0111 07:23:30.666531    9587 driver.go:422] Setting default libvirt URI to qemu:///system
	I0111 07:23:30.687430    9587 docker.go:124] docker version: linux-29.1.4:Docker Engine - Community
	I0111 07:23:30.687567    9587 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0111 07:23:30.740501    9587 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:49 SystemTime:2026-01-11 07:23:30.73115093 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[ma
p[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0111 07:23:30.740595    9587 docker.go:319] overlay module found
	I0111 07:23:30.742125    9587 out.go:179] * Using the docker driver based on user configuration
	I0111 07:23:30.743181    9587 start.go:309] selected driver: docker
	I0111 07:23:30.743195    9587 start.go:928] validating driver "docker" against <nil>
	I0111 07:23:30.743206    9587 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0111 07:23:30.743719    9587 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0111 07:23:30.796382    9587 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:49 SystemTime:2026-01-11 07:23:30.787633782 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0111 07:23:30.796516    9587 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I0111 07:23:30.796724    9587 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0111 07:23:30.798500    9587 out.go:179] * Using Docker driver with root privileges
	I0111 07:23:30.799793    9587 cni.go:84] Creating CNI manager for ""
	I0111 07:23:30.799874    9587 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0111 07:23:30.799885    9587 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I0111 07:23:30.799940    9587 start.go:353] cluster config:
	{Name:addons-171536 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:addons-171536 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:
AutoPauseInterval:1m0s Rosetta:false}
	I0111 07:23:30.801080    9587 out.go:179] * Starting "addons-171536" primary control-plane node in "addons-171536" cluster
	I0111 07:23:30.801952    9587 cache.go:134] Beginning downloading kic base image for docker with crio
	I0111 07:23:30.803026    9587 out.go:179] * Pulling base image v0.0.48-1768032998-22402 ...
	I0111 07:23:30.804119    9587 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0111 07:23:30.804147    9587 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22402-4689/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4
	I0111 07:23:30.804153    9587 cache.go:65] Caching tarball of preloaded images
	I0111 07:23:30.804215    9587 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 in local docker daemon
	I0111 07:23:30.804243    9587 preload.go:251] Found /home/jenkins/minikube-integration/22402-4689/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0111 07:23:30.804254    9587 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I0111 07:23:30.804667    9587 profile.go:143] Saving config to /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/addons-171536/config.json ...
	I0111 07:23:30.804695    9587 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/addons-171536/config.json: {Name:mk8c00a0e07ea013ac90c9d1f5814a7fd4a2c7d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 07:23:30.819737    9587 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 to local cache
	I0111 07:23:30.819881    9587 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 in local cache directory
	I0111 07:23:30.819904    9587 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 in local cache directory, skipping pull
	I0111 07:23:30.819910    9587 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 exists in cache, skipping pull
	I0111 07:23:30.819923    9587 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 as a tarball
	I0111 07:23:30.819933    9587 cache.go:176] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 from local cache
	I0111 07:23:43.557060    9587 cache.go:178] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 from cached tarball
	I0111 07:23:43.557101    9587 cache.go:243] Successfully downloaded all kic artifacts
	I0111 07:23:43.557177    9587 start.go:360] acquireMachinesLock for addons-171536: {Name:mkdc6406899360bbd96a762cec08fda52722b188 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0111 07:23:43.557291    9587 start.go:364] duration metric: took 90.88µs to acquireMachinesLock for "addons-171536"
	I0111 07:23:43.557321    9587 start.go:93] Provisioning new machine with config: &{Name:addons-171536 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:addons-171536 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0111 07:23:43.557426    9587 start.go:125] createHost starting for "" (driver="docker")
	I0111 07:23:43.559651    9587 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I0111 07:23:43.559901    9587 start.go:159] libmachine.API.Create for "addons-171536" (driver="docker")
	I0111 07:23:43.559939    9587 client.go:173] LocalClient.Create starting
	I0111 07:23:43.560061    9587 main.go:144] libmachine: Creating CA: /home/jenkins/minikube-integration/22402-4689/.minikube/certs/ca.pem
	I0111 07:23:43.627010    9587 main.go:144] libmachine: Creating client certificate: /home/jenkins/minikube-integration/22402-4689/.minikube/certs/cert.pem
	I0111 07:23:43.885492    9587 cli_runner.go:164] Run: docker network inspect addons-171536 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0111 07:23:43.902924    9587 cli_runner.go:211] docker network inspect addons-171536 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0111 07:23:43.903007    9587 network_create.go:284] running [docker network inspect addons-171536] to gather additional debugging logs...
	I0111 07:23:43.903030    9587 cli_runner.go:164] Run: docker network inspect addons-171536
	W0111 07:23:43.918864    9587 cli_runner.go:211] docker network inspect addons-171536 returned with exit code 1
	I0111 07:23:43.918891    9587 network_create.go:287] error running [docker network inspect addons-171536]: docker network inspect addons-171536: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-171536 not found
	I0111 07:23:43.918909    9587 network_create.go:289] output of [docker network inspect addons-171536]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-171536 not found
	
	** /stderr **
	I0111 07:23:43.919012    9587 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0111 07:23:43.936006    9587 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ceac80}
	I0111 07:23:43.936041    9587 network_create.go:124] attempt to create docker network addons-171536 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0111 07:23:43.936079    9587 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-171536 addons-171536
	I0111 07:23:43.980986    9587 network_create.go:108] docker network addons-171536 192.168.49.0/24 created
	I0111 07:23:43.981017    9587 kic.go:121] calculated static IP "192.168.49.2" for the "addons-171536" container
	I0111 07:23:43.981084    9587 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0111 07:23:43.996450    9587 cli_runner.go:164] Run: docker volume create addons-171536 --label name.minikube.sigs.k8s.io=addons-171536 --label created_by.minikube.sigs.k8s.io=true
	I0111 07:23:44.013755    9587 oci.go:103] Successfully created a docker volume addons-171536
	I0111 07:23:44.013850    9587 cli_runner.go:164] Run: docker run --rm --name addons-171536-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-171536 --entrypoint /usr/bin/test -v addons-171536:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 -d /var/lib
	I0111 07:23:50.560861    9587 cli_runner.go:217] Completed: docker run --rm --name addons-171536-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-171536 --entrypoint /usr/bin/test -v addons-171536:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 -d /var/lib: (6.546916834s)
	I0111 07:23:50.560896    9587 oci.go:107] Successfully prepared a docker volume addons-171536
	I0111 07:23:50.560965    9587 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0111 07:23:50.560977    9587 kic.go:194] Starting extracting preloaded images to volume ...
	I0111 07:23:50.561047    9587 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22402-4689/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-171536:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 -I lz4 -xf /preloaded.tar -C /extractDir
	I0111 07:23:54.278430    9587 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22402-4689/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-171536:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 -I lz4 -xf /preloaded.tar -C /extractDir: (3.717330438s)
	I0111 07:23:54.278461    9587 kic.go:203] duration metric: took 3.717480469s to extract preloaded images to volume ...
	W0111 07:23:54.278541    9587 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0111 07:23:54.278568    9587 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0111 07:23:54.278602    9587 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0111 07:23:54.331360    9587 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-171536 --name addons-171536 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-171536 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-171536 --network addons-171536 --ip 192.168.49.2 --volume addons-171536:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615
	I0111 07:23:54.620738    9587 cli_runner.go:164] Run: docker container inspect addons-171536 --format={{.State.Running}}
	I0111 07:23:54.639254    9587 cli_runner.go:164] Run: docker container inspect addons-171536 --format={{.State.Status}}
	I0111 07:23:54.658075    9587 cli_runner.go:164] Run: docker exec addons-171536 stat /var/lib/dpkg/alternatives/iptables
	I0111 07:23:54.704896    9587 oci.go:144] the created container "addons-171536" has a running status.
	I0111 07:23:54.704950    9587 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22402-4689/.minikube/machines/addons-171536/id_rsa...
	I0111 07:23:54.795032    9587 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22402-4689/.minikube/machines/addons-171536/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0111 07:23:54.819600    9587 cli_runner.go:164] Run: docker container inspect addons-171536 --format={{.State.Status}}
	I0111 07:23:54.839319    9587 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0111 07:23:54.839346    9587 kic_runner.go:114] Args: [docker exec --privileged addons-171536 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0111 07:23:54.884978    9587 cli_runner.go:164] Run: docker container inspect addons-171536 --format={{.State.Status}}
	I0111 07:23:54.910719    9587 machine.go:94] provisionDockerMachine start ...
	I0111 07:23:54.910850    9587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-171536
	I0111 07:23:54.934568    9587 main.go:144] libmachine: Using SSH client type: native
	I0111 07:23:54.935017    9587 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x9087e0] 0x90b480 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0111 07:23:54.935031    9587 main.go:144] libmachine: About to run SSH command:
	hostname
	I0111 07:23:55.088349    9587 main.go:144] libmachine: SSH cmd err, output: <nil>: addons-171536
	
	I0111 07:23:55.088376    9587 ubuntu.go:182] provisioning hostname "addons-171536"
	I0111 07:23:55.088435    9587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-171536
	I0111 07:23:55.107879    9587 main.go:144] libmachine: Using SSH client type: native
	I0111 07:23:55.108080    9587 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x9087e0] 0x90b480 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0111 07:23:55.108094    9587 main.go:144] libmachine: About to run SSH command:
	sudo hostname addons-171536 && echo "addons-171536" | sudo tee /etc/hostname
	I0111 07:23:55.260829    9587 main.go:144] libmachine: SSH cmd err, output: <nil>: addons-171536
	
	I0111 07:23:55.260911    9587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-171536
	I0111 07:23:55.279642    9587 main.go:144] libmachine: Using SSH client type: native
	I0111 07:23:55.279897    9587 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x9087e0] 0x90b480 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0111 07:23:55.279915    9587 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-171536' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-171536/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-171536' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0111 07:23:55.423081    9587 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I0111 07:23:55.423126    9587 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22402-4689/.minikube CaCertPath:/home/jenkins/minikube-integration/22402-4689/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22402-4689/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22402-4689/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22402-4689/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22402-4689/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22402-4689/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22402-4689/.minikube}
	I0111 07:23:55.423179    9587 ubuntu.go:190] setting up certificates
	I0111 07:23:55.423193    9587 provision.go:84] configureAuth start
	I0111 07:23:55.423259    9587 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-171536
	I0111 07:23:55.440824    9587 provision.go:143] copyHostCerts
	I0111 07:23:55.440919    9587 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22402-4689/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22402-4689/.minikube/ca.pem (1082 bytes)
	I0111 07:23:55.441051    9587 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22402-4689/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22402-4689/.minikube/cert.pem (1123 bytes)
	I0111 07:23:55.441139    9587 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22402-4689/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22402-4689/.minikube/key.pem (1675 bytes)
	I0111 07:23:55.441221    9587 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22402-4689/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22402-4689/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22402-4689/.minikube/certs/ca-key.pem org=jenkins.addons-171536 san=[127.0.0.1 192.168.49.2 addons-171536 localhost minikube]
	I0111 07:23:55.480938    9587 provision.go:177] copyRemoteCerts
	I0111 07:23:55.481006    9587 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0111 07:23:55.481048    9587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-171536
	I0111 07:23:55.499066    9587 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22402-4689/.minikube/machines/addons-171536/id_rsa Username:docker}
	I0111 07:23:55.599713    9587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-4689/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0111 07:23:55.618329    9587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-4689/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0111 07:23:55.634305    9587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-4689/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0111 07:23:55.650149    9587 provision.go:87] duration metric: took 226.939745ms to configureAuth
	I0111 07:23:55.650178    9587 ubuntu.go:206] setting minikube options for container-runtime
	I0111 07:23:55.650380    9587 config.go:182] Loaded profile config "addons-171536": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 07:23:55.650492    9587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-171536
	I0111 07:23:55.667636    9587 main.go:144] libmachine: Using SSH client type: native
	I0111 07:23:55.667893    9587 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x9087e0] 0x90b480 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0111 07:23:55.667914    9587 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0111 07:23:55.948667    9587 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0111 07:23:55.948696    9587 machine.go:97] duration metric: took 1.037942637s to provisionDockerMachine
	I0111 07:23:55.948706    9587 client.go:176] duration metric: took 12.388757797s to LocalClient.Create
	I0111 07:23:55.948726    9587 start.go:167] duration metric: took 12.388824976s to libmachine.API.Create "addons-171536"
	I0111 07:23:55.948736    9587 start.go:293] postStartSetup for "addons-171536" (driver="docker")
	I0111 07:23:55.948747    9587 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0111 07:23:55.948793    9587 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0111 07:23:55.948859    9587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-171536
	I0111 07:23:55.966175    9587 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22402-4689/.minikube/machines/addons-171536/id_rsa Username:docker}
	I0111 07:23:56.068423    9587 ssh_runner.go:195] Run: cat /etc/os-release
	I0111 07:23:56.071817    9587 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0111 07:23:56.071839    9587 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I0111 07:23:56.071849    9587 filesync.go:126] Scanning /home/jenkins/minikube-integration/22402-4689/.minikube/addons for local assets ...
	I0111 07:23:56.071898    9587 filesync.go:126] Scanning /home/jenkins/minikube-integration/22402-4689/.minikube/files for local assets ...
	I0111 07:23:56.071921    9587 start.go:296] duration metric: took 123.179335ms for postStartSetup
	I0111 07:23:56.072169    9587 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-171536
	I0111 07:23:56.089982    9587 profile.go:143] Saving config to /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/addons-171536/config.json ...
	I0111 07:23:56.090256    9587 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0111 07:23:56.090310    9587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-171536
	I0111 07:23:56.108357    9587 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22402-4689/.minikube/machines/addons-171536/id_rsa Username:docker}
	I0111 07:23:56.206579    9587 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0111 07:23:56.210728    9587 start.go:128] duration metric: took 12.653289896s to createHost
	I0111 07:23:56.210760    9587 start.go:83] releasing machines lock for "addons-171536", held for 12.653455558s
	I0111 07:23:56.210836    9587 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-171536
	I0111 07:23:56.227429    9587 ssh_runner.go:195] Run: cat /version.json
	I0111 07:23:56.227473    9587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-171536
	I0111 07:23:56.227550    9587 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0111 07:23:56.227628    9587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-171536
	I0111 07:23:56.245465    9587 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22402-4689/.minikube/machines/addons-171536/id_rsa Username:docker}
	I0111 07:23:56.245917    9587 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22402-4689/.minikube/machines/addons-171536/id_rsa Username:docker}
	I0111 07:23:56.409959    9587 ssh_runner.go:195] Run: systemctl --version
	I0111 07:23:56.416065    9587 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0111 07:23:56.448734    9587 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0111 07:23:56.453156    9587 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0111 07:23:56.453219    9587 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0111 07:23:56.477381    9587 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0111 07:23:56.477405    9587 start.go:496] detecting cgroup driver to use...
	I0111 07:23:56.477436    9587 detect.go:178] detected "systemd" cgroup driver on host os
	I0111 07:23:56.477484    9587 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0111 07:23:56.491971    9587 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0111 07:23:56.503575    9587 docker.go:218] disabling cri-docker service (if available) ...
	I0111 07:23:56.503626    9587 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0111 07:23:56.518539    9587 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0111 07:23:56.535275    9587 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0111 07:23:56.616433    9587 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0111 07:23:56.700066    9587 docker.go:234] disabling docker service ...
	I0111 07:23:56.700133    9587 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0111 07:23:56.717450    9587 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0111 07:23:56.729592    9587 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0111 07:23:56.809966    9587 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0111 07:23:56.890440    9587 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0111 07:23:56.902425    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0111 07:23:56.916451    9587 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0111 07:23:56.916499    9587 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 07:23:56.926032    9587 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I0111 07:23:56.926187    9587 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 07:23:56.934753    9587 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 07:23:56.942903    9587 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 07:23:56.950933    9587 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0111 07:23:56.958411    9587 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 07:23:56.966494    9587 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 07:23:56.978863    9587 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 07:23:56.986718    9587 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0111 07:23:56.993457    9587 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0111 07:23:56.993495    9587 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0111 07:23:57.004657    9587 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0111 07:23:57.011265    9587 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0111 07:23:57.087141    9587 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0111 07:23:57.217071    9587 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I0111 07:23:57.217164    9587 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0111 07:23:57.221214    9587 start.go:574] Will wait 60s for crictl version
	I0111 07:23:57.221270    9587 ssh_runner.go:195] Run: which crictl
	I0111 07:23:57.224760    9587 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I0111 07:23:57.247854    9587 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.35.0
	RuntimeApiVersion:  v1
	I0111 07:23:57.247967    9587 ssh_runner.go:195] Run: crio --version
	I0111 07:23:57.274610    9587 ssh_runner.go:195] Run: crio --version
	I0111 07:23:57.302379    9587 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.35.0 ...
	I0111 07:23:57.303307    9587 cli_runner.go:164] Run: docker network inspect addons-171536 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0111 07:23:57.319341    9587 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0111 07:23:57.323191    9587 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0111 07:23:57.332742    9587 kubeadm.go:884] updating cluster {Name:addons-171536 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:addons-171536 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketV
MnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I0111 07:23:57.332860    9587 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0111 07:23:57.332920    9587 ssh_runner.go:195] Run: sudo crictl images --output json
	I0111 07:23:57.364147    9587 crio.go:561] all images are preloaded for cri-o runtime.
	I0111 07:23:57.364165    9587 crio.go:433] Images already preloaded, skipping extraction
	I0111 07:23:57.364224    9587 ssh_runner.go:195] Run: sudo crictl images --output json
	I0111 07:23:57.387524    9587 crio.go:561] all images are preloaded for cri-o runtime.
	I0111 07:23:57.387544    9587 cache_images.go:86] Images are preloaded, skipping loading
	I0111 07:23:57.387551    9587 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.35.0 crio true true} ...
	I0111 07:23:57.387632    9587 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-171536 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:addons-171536 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0111 07:23:57.387691    9587 ssh_runner.go:195] Run: crio config
	I0111 07:23:57.430094    9587 cni.go:84] Creating CNI manager for ""
	I0111 07:23:57.430113    9587 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0111 07:23:57.430127    9587 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I0111 07:23:57.430152    9587 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-171536 NodeName:addons-171536 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernet
es/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0111 07:23:57.430290    9587 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-171536"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0111 07:23:57.430356    9587 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I0111 07:23:57.438014    9587 binaries.go:51] Found k8s binaries, skipping transfer
	I0111 07:23:57.438086    9587 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0111 07:23:57.445394    9587 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0111 07:23:57.456952    9587 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0111 07:23:57.471130    9587 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I0111 07:23:57.482696    9587 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0111 07:23:57.486020    9587 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
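The bash one-liner above makes the /etc/hosts update idempotent: any existing control-plane.minikube.internal line is stripped before the fresh "192.168.49.2	control-plane.minikube.internal" mapping is appended and the temp file copied back into place. A minimal Go sketch of the same rewrite, assuming ordinary file access (the helper name and permissions are illustrative, not minikube's own code):

package main

import (
	"fmt"
	"os"
	"strings"
)

// rewriteHosts drops any stale line ending in "\t<host>" and appends "ip\thost",
// mirroring the grep -v / echo / cp pipeline shown in the log above.
func rewriteHosts(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue // remove the old mapping
		}
		kept = append(kept, line)
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := rewriteHosts("/etc/hosts", "192.168.49.2", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}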
	I0111 07:23:57.494991    9587 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0111 07:23:57.573087    9587 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0111 07:23:57.593739    9587 certs.go:69] Setting up /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/addons-171536 for IP: 192.168.49.2
	I0111 07:23:57.593763    9587 certs.go:195] generating shared ca certs ...
	I0111 07:23:57.593779    9587 certs.go:227] acquiring lock for ca certs: {Name:mk9f7a57f61b85f2c0f52e44b6a926769e368f21 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 07:23:57.593949    9587 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/22402-4689/.minikube/ca.key
	I0111 07:23:57.713930    9587 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22402-4689/.minikube/ca.crt ...
	I0111 07:23:57.713957    9587 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-4689/.minikube/ca.crt: {Name:mk61457c6289f53d4f5a59c382806adbc6986dd1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 07:23:57.714117    9587 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22402-4689/.minikube/ca.key ...
	I0111 07:23:57.714130    9587 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-4689/.minikube/ca.key: {Name:mka258fce703ffa5815b17208780397526a4c879 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 07:23:57.714203    9587 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22402-4689/.minikube/proxy-client-ca.key
	I0111 07:23:57.768850    9587 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22402-4689/.minikube/proxy-client-ca.crt ...
	I0111 07:23:57.768873    9587 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-4689/.minikube/proxy-client-ca.crt: {Name:mkc326b4deb1d69655223e3f5f74de51f4e7a4a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 07:23:57.769011    9587 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22402-4689/.minikube/proxy-client-ca.key ...
	I0111 07:23:57.769022    9587 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-4689/.minikube/proxy-client-ca.key: {Name:mk4c47959990a373f9004af3b6c51da47636d1ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
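Both shared CAs written above (minikubeCA and proxyClientCA) are plain self-signed certificates stored under the profile's .minikube directory. A hedged sketch of producing such a CA pair with the Go standard library; the common name, key size, validity window, and output file names are assumptions for illustration, not minikube's actual generation code:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"os"
	"time"
)

func main() {
	// Generate a 2048-bit RSA key and a self-signed CA certificate.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0), // validity is an assumption
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
		IsCA:                  true,
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	_ = os.WriteFile("ca.crt", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0644)
	_ = os.WriteFile("ca.key", pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)}), 0600)
}

The profile certificates generated afterwards (client, apiserver, aggregator) follow the same pattern, except they are signed by these CAs and carry the SANs listed in the log.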
	I0111 07:23:57.769086    9587 certs.go:257] generating profile certs ...
	I0111 07:23:57.769137    9587 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/addons-171536/client.key
	I0111 07:23:57.769156    9587 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/addons-171536/client.crt with IP's: []
	I0111 07:23:57.881976    9587 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/addons-171536/client.crt ...
	I0111 07:23:57.882011    9587 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/addons-171536/client.crt: {Name:mk8710cbc7833a92e028c047d5a765092557658f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 07:23:57.882196    9587 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/addons-171536/client.key ...
	I0111 07:23:57.882209    9587 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/addons-171536/client.key: {Name:mk2678144c951466c58f0444cbb27ffe90e7aead Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 07:23:57.882295    9587 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/addons-171536/apiserver.key.aff12fd7
	I0111 07:23:57.882316    9587 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/addons-171536/apiserver.crt.aff12fd7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0111 07:23:57.933475    9587 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/addons-171536/apiserver.crt.aff12fd7 ...
	I0111 07:23:57.933503    9587 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/addons-171536/apiserver.crt.aff12fd7: {Name:mkca04ba532142ab48d9b71d132d17c02686e930 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 07:23:57.933661    9587 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/addons-171536/apiserver.key.aff12fd7 ...
	I0111 07:23:57.933675    9587 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/addons-171536/apiserver.key.aff12fd7: {Name:mk77f33feb19bee1b68a8a7c8ba2b046ab8c483c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 07:23:57.933753    9587 certs.go:382] copying /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/addons-171536/apiserver.crt.aff12fd7 -> /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/addons-171536/apiserver.crt
	I0111 07:23:57.933881    9587 certs.go:386] copying /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/addons-171536/apiserver.key.aff12fd7 -> /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/addons-171536/apiserver.key
	I0111 07:23:57.933949    9587 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/addons-171536/proxy-client.key
	I0111 07:23:57.933969    9587 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/addons-171536/proxy-client.crt with IP's: []
	I0111 07:23:58.121056    9587 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/addons-171536/proxy-client.crt ...
	I0111 07:23:58.121089    9587 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/addons-171536/proxy-client.crt: {Name:mk77013e61b91f18bc8582ebafd2aa05d90e22ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 07:23:58.121263    9587 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/addons-171536/proxy-client.key ...
	I0111 07:23:58.121276    9587 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/addons-171536/proxy-client.key: {Name:mk5a1bc6fac6c44610a89d20bf01d4a8d4f0f6e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 07:23:58.121493    9587 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-4689/.minikube/certs/ca-key.pem (1679 bytes)
	I0111 07:23:58.121534    9587 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-4689/.minikube/certs/ca.pem (1082 bytes)
	I0111 07:23:58.121570    9587 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-4689/.minikube/certs/cert.pem (1123 bytes)
	I0111 07:23:58.121603    9587 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-4689/.minikube/certs/key.pem (1675 bytes)
	I0111 07:23:58.122207    9587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-4689/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0111 07:23:58.139436    9587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-4689/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0111 07:23:58.155781    9587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-4689/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0111 07:23:58.171768    9587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-4689/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0111 07:23:58.187862    9587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/addons-171536/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0111 07:23:58.203861    9587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/addons-171536/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0111 07:23:58.219576    9587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/addons-171536/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0111 07:23:58.235235    9587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/addons-171536/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0111 07:23:58.251012    9587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-4689/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0111 07:23:58.269058    9587 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I0111 07:23:58.281091    9587 ssh_runner.go:195] Run: openssl version
	I0111 07:23:58.287503    9587 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I0111 07:23:58.294962    9587 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I0111 07:23:58.305054    9587 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0111 07:23:58.308861    9587 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 11 07:23 /usr/share/ca-certificates/minikubeCA.pem
	I0111 07:23:58.308917    9587 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0111 07:23:58.342109    9587 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I0111 07:23:58.349376    9587 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
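The symlink name b5213941.0 follows OpenSSL's hashed trust-directory convention: the file name is the certificate's subject hash (the value printed by "openssl x509 -hash -noout" two steps earlier) plus a ".0" suffix, so the system trust store can locate the minikube CA by hash lookup. A small illustrative Go wrapper over the same two commands; the helper name and trust directory argument are ours, not part of minikube:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// linkByHash computes the subject hash of a PEM certificate with openssl,
// then symlinks the certificate as <hash>.0 inside the trust directory.
func linkByHash(certPath, trustDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := fmt.Sprintf("%s/%s.0", trustDir, hash)
	_ = os.Remove(link) // emulate ln -f: replace any existing link
	return os.Symlink(certPath, link)
}

func main() {
	fmt.Println(linkByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"))
}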
	I0111 07:23:58.356091    9587 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0111 07:23:58.359278    9587 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0111 07:23:58.359327    9587 kubeadm.go:401] StartCluster: {Name:addons-171536 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:addons-171536 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0111 07:23:58.359413    9587 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0111 07:23:58.359450    9587 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0111 07:23:58.384212    9587 cri.go:96] found id: ""
	I0111 07:23:58.384265    9587 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0111 07:23:58.391446    9587 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0111 07:23:58.398430    9587 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I0111 07:23:58.398468    9587 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0111 07:23:58.405286    9587 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0111 07:23:58.405304    9587 kubeadm.go:158] found existing configuration files:
	
	I0111 07:23:58.405336    9587 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0111 07:23:58.412334    9587 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0111 07:23:58.412383    9587 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0111 07:23:58.418894    9587 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0111 07:23:58.425697    9587 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0111 07:23:58.425741    9587 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0111 07:23:58.432348    9587 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0111 07:23:58.439099    9587 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0111 07:23:58.439140    9587 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0111 07:23:58.445733    9587 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0111 07:23:58.452626    9587 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0111 07:23:58.452673    9587 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0111 07:23:58.459699    9587 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0111 07:23:58.492258    9587 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I0111 07:23:58.492322    9587 kubeadm.go:319] [preflight] Running pre-flight checks
	I0111 07:23:58.551499    9587 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I0111 07:23:58.551579    9587 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I0111 07:23:58.551621    9587 kubeadm.go:319] OS: Linux
	I0111 07:23:58.551679    9587 kubeadm.go:319] CGROUPS_CPU: enabled
	I0111 07:23:58.551747    9587 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I0111 07:23:58.551839    9587 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I0111 07:23:58.551907    9587 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I0111 07:23:58.551984    9587 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I0111 07:23:58.552057    9587 kubeadm.go:319] CGROUPS_PIDS: enabled
	I0111 07:23:58.552123    9587 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I0111 07:23:58.552185    9587 kubeadm.go:319] CGROUPS_IO: enabled
	I0111 07:23:58.604192    9587 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0111 07:23:58.604360    9587 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0111 07:23:58.604498    9587 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0111 07:23:58.611615    9587 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0111 07:23:58.615614    9587 out.go:252]   - Generating certificates and keys ...
	I0111 07:23:58.615690    9587 kubeadm.go:319] [certs] Using existing ca certificate authority
	I0111 07:23:58.615755    9587 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I0111 07:23:58.741420    9587 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0111 07:23:58.850820    9587 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I0111 07:23:58.907337    9587 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I0111 07:23:59.133463    9587 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I0111 07:23:59.250542    9587 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I0111 07:23:59.250689    9587 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-171536 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0111 07:23:59.403538    9587 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I0111 07:23:59.403656    9587 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-171536 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0111 07:23:59.447287    9587 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0111 07:23:59.648656    9587 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I0111 07:23:59.737664    9587 kubeadm.go:319] [certs] Generating "sa" key and public key
	I0111 07:23:59.737754    9587 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0111 07:23:59.778631    9587 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0111 07:23:59.818311    9587 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0111 07:23:59.942074    9587 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0111 07:23:59.992180    9587 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0111 07:24:00.244795    9587 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0111 07:24:00.245290    9587 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0111 07:24:00.248735    9587 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0111 07:24:00.251708    9587 out.go:252]   - Booting up control plane ...
	I0111 07:24:00.251790    9587 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0111 07:24:00.251876    9587 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0111 07:24:00.251950    9587 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0111 07:24:00.263816    9587 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0111 07:24:00.263962    9587 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0111 07:24:00.270846    9587 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0111 07:24:00.271066    9587 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0111 07:24:00.271124    9587 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I0111 07:24:00.365080    9587 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0111 07:24:00.365247    9587 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0111 07:24:00.866775    9587 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.793148ms
	I0111 07:24:00.869592    9587 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0111 07:24:00.869695    9587 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I0111 07:24:00.869779    9587 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0111 07:24:00.869874    9587 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0111 07:24:01.375526    9587 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 505.840953ms
	I0111 07:24:02.552900    9587 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.683068529s
	I0111 07:24:04.370923    9587 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.501215167s
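The three [control-plane-check] lines above simply poll each component's health endpoint until it answers 200 OK or a four-minute deadline expires. A minimal illustrative poller in Go; the URLs and overall timeout come from the log, while the retry interval, TLS handling, and function name are assumptions for the sketch:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitHealthy polls url until it returns 200 OK or the deadline passes,
// roughly the behaviour the [control-plane-check] phase reports above.
func waitHealthy(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if resp, err := client.Get(url); err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("%s not healthy after %s", url, timeout)
}

func main() {
	for _, u := range []string{
		"https://192.168.49.2:8443/livez",   // kube-apiserver
		"https://127.0.0.1:10257/healthz",   // kube-controller-manager
		"https://127.0.0.1:10259/livez",     // kube-scheduler
	} {
		fmt.Println(u, waitHealthy(u, 4*time.Minute))
	}
}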
	I0111 07:24:04.386789    9587 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0111 07:24:04.395728    9587 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0111 07:24:04.403527    9587 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I0111 07:24:04.403790    9587 kubeadm.go:319] [mark-control-plane] Marking the node addons-171536 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0111 07:24:04.411572    9587 kubeadm.go:319] [bootstrap-token] Using token: js8wq0.9gekyqzhu8pdo59q
	I0111 07:24:04.412882    9587 out.go:252]   - Configuring RBAC rules ...
	I0111 07:24:04.413018    9587 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0111 07:24:04.415504    9587 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0111 07:24:04.420065    9587 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0111 07:24:04.422248    9587 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0111 07:24:04.425110    9587 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0111 07:24:04.427329    9587 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0111 07:24:04.777860    9587 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0111 07:24:05.190249    9587 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I0111 07:24:05.775850    9587 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I0111 07:24:05.776708    9587 kubeadm.go:319] 
	I0111 07:24:05.776769    9587 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I0111 07:24:05.776794    9587 kubeadm.go:319] 
	I0111 07:24:05.776922    9587 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I0111 07:24:05.776930    9587 kubeadm.go:319] 
	I0111 07:24:05.776951    9587 kubeadm.go:319]   mkdir -p $HOME/.kube
	I0111 07:24:05.777004    9587 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0111 07:24:05.777071    9587 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0111 07:24:05.777087    9587 kubeadm.go:319] 
	I0111 07:24:05.777166    9587 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I0111 07:24:05.777179    9587 kubeadm.go:319] 
	I0111 07:24:05.777261    9587 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0111 07:24:05.777271    9587 kubeadm.go:319] 
	I0111 07:24:05.777311    9587 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I0111 07:24:05.777375    9587 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0111 07:24:05.777434    9587 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0111 07:24:05.777439    9587 kubeadm.go:319] 
	I0111 07:24:05.777504    9587 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I0111 07:24:05.777594    9587 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I0111 07:24:05.777605    9587 kubeadm.go:319] 
	I0111 07:24:05.777720    9587 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token js8wq0.9gekyqzhu8pdo59q \
	I0111 07:24:05.777869    9587 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:c84f368e6b8058d9b8f5e45cbb9f009ab0e06984b3c0d5786b368fcef9c9a8fc \
	I0111 07:24:05.777896    9587 kubeadm.go:319] 	--control-plane 
	I0111 07:24:05.777900    9587 kubeadm.go:319] 
	I0111 07:24:05.777992    9587 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I0111 07:24:05.778017    9587 kubeadm.go:319] 
	I0111 07:24:05.778111    9587 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token js8wq0.9gekyqzhu8pdo59q \
	I0111 07:24:05.778246    9587 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:c84f368e6b8058d9b8f5e45cbb9f009ab0e06984b3c0d5786b368fcef9c9a8fc 
	I0111 07:24:05.779985    9587 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I0111 07:24:05.780075    9587 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0111 07:24:05.780118    9587 cni.go:84] Creating CNI manager for ""
	I0111 07:24:05.780136    9587 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0111 07:24:05.781787    9587 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I0111 07:24:05.782971    9587 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0111 07:24:05.786942    9587 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.35.0/kubectl ...
	I0111 07:24:05.786956    9587 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I0111 07:24:05.799888    9587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0111 07:24:05.988357    9587 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0111 07:24:05.988438    9587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0111 07:24:05.988450    9587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-171536 minikube.k8s.io/updated_at=2026_01_11T07_24_05_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=4473aa4ffaa416872fe849e19c0ce3dabca02c04 minikube.k8s.io/name=addons-171536 minikube.k8s.io/primary=true
	I0111 07:24:06.071383    9587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0111 07:24:06.071381    9587 ops.go:34] apiserver oom_adj: -16
	I0111 07:24:06.572158    9587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0111 07:24:07.072322    9587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0111 07:24:07.572031    9587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0111 07:24:08.072261    9587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0111 07:24:08.571529    9587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0111 07:24:09.072467    9587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0111 07:24:09.572412    9587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0111 07:24:10.072033    9587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0111 07:24:10.571837    9587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0111 07:24:10.637827    9587 kubeadm.go:1114] duration metric: took 4.649450256s to wait for elevateKubeSystemPrivileges
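The repeated "kubectl get sa default" calls above, spaced roughly 500ms apart, are a wait loop: the cluster-admin binding for kube-system can only take effect once the default service account exists, so the command is retried until it exits 0 (about 4.6s here). A hedged Go sketch of such a retry-until-success wrapper; the command and interval mirror the log, but the helper name and attempt limit are ours:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// retryUntil runs the command every interval until it exits 0 or attempts run out,
// the same shape as the repeated "kubectl get sa default" calls in the log.
func retryUntil(interval time.Duration, attempts int, name string, args ...string) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = exec.Command(name, args...).Run(); err == nil {
			return nil
		}
		time.Sleep(interval)
	}
	return fmt.Errorf("command %s did not succeed after %d attempts: %w", name, attempts, err)
}

func main() {
	err := retryUntil(500*time.Millisecond, 20,
		"sudo", "/var/lib/minikube/binaries/v1.35.0/kubectl",
		"get", "sa", "default", "--kubeconfig=/var/lib/minikube/kubeconfig")
	fmt.Println("default service account ready:", err == nil)
}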
	I0111 07:24:10.637874    9587 kubeadm.go:403] duration metric: took 12.278551676s to StartCluster
	I0111 07:24:10.637898    9587 settings.go:142] acquiring lock: {Name:mk091111a06dcfa678353e028948ce400417c137 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 07:24:10.638054    9587 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22402-4689/kubeconfig
	I0111 07:24:10.638504    9587 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-4689/kubeconfig: {Name:mkf02a7b755a7b9c0efbafb355649086e2a25e84 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 07:24:10.638731    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0111 07:24:10.638756    9587 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0111 07:24:10.638825    9587 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0111 07:24:10.638949    9587 addons.go:70] Setting yakd=true in profile "addons-171536"
	I0111 07:24:10.638969    9587 config.go:182] Loaded profile config "addons-171536": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 07:24:10.638982    9587 addons.go:239] Setting addon yakd=true in "addons-171536"
	I0111 07:24:10.639015    9587 host.go:66] Checking if "addons-171536" exists ...
	I0111 07:24:10.639026    9587 addons.go:70] Setting inspektor-gadget=true in profile "addons-171536"
	I0111 07:24:10.639042    9587 addons.go:239] Setting addon inspektor-gadget=true in "addons-171536"
	I0111 07:24:10.639049    9587 addons.go:70] Setting default-storageclass=true in profile "addons-171536"
	I0111 07:24:10.639068    9587 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-171536"
	I0111 07:24:10.639080    9587 host.go:66] Checking if "addons-171536" exists ...
	I0111 07:24:10.639090    9587 addons.go:70] Setting registry-creds=true in profile "addons-171536"
	I0111 07:24:10.639118    9587 addons.go:239] Setting addon registry-creds=true in "addons-171536"
	I0111 07:24:10.639165    9587 host.go:66] Checking if "addons-171536" exists ...
	I0111 07:24:10.639163    9587 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-171536"
	I0111 07:24:10.639231    9587 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-171536"
	I0111 07:24:10.639392    9587 cli_runner.go:164] Run: docker container inspect addons-171536 --format={{.State.Status}}
	I0111 07:24:10.639535    9587 cli_runner.go:164] Run: docker container inspect addons-171536 --format={{.State.Status}}
	I0111 07:24:10.639546    9587 cli_runner.go:164] Run: docker container inspect addons-171536 --format={{.State.Status}}
	I0111 07:24:10.639553    9587 addons.go:70] Setting storage-provisioner=true in profile "addons-171536"
	I0111 07:24:10.639569    9587 addons.go:239] Setting addon storage-provisioner=true in "addons-171536"
	I0111 07:24:10.639592    9587 host.go:66] Checking if "addons-171536" exists ...
	I0111 07:24:10.639584    9587 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-171536"
	I0111 07:24:10.639625    9587 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-171536"
	I0111 07:24:10.639676    9587 host.go:66] Checking if "addons-171536" exists ...
	I0111 07:24:10.639694    9587 cli_runner.go:164] Run: docker container inspect addons-171536 --format={{.State.Status}}
	I0111 07:24:10.639860    9587 addons.go:70] Setting ingress=true in profile "addons-171536"
	I0111 07:24:10.639892    9587 addons.go:239] Setting addon ingress=true in "addons-171536"
	I0111 07:24:10.639922    9587 host.go:66] Checking if "addons-171536" exists ...
	I0111 07:24:10.640070    9587 cli_runner.go:164] Run: docker container inspect addons-171536 --format={{.State.Status}}
	I0111 07:24:10.640206    9587 cli_runner.go:164] Run: docker container inspect addons-171536 --format={{.State.Status}}
	I0111 07:24:10.640376    9587 cli_runner.go:164] Run: docker container inspect addons-171536 --format={{.State.Status}}
	I0111 07:24:10.640696    9587 addons.go:70] Setting gcp-auth=true in profile "addons-171536"
	I0111 07:24:10.640758    9587 mustload.go:66] Loading cluster: addons-171536
	I0111 07:24:10.641266    9587 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-171536"
	I0111 07:24:10.641297    9587 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-171536"
	I0111 07:24:10.641326    9587 host.go:66] Checking if "addons-171536" exists ...
	I0111 07:24:10.641487    9587 config.go:182] Loaded profile config "addons-171536": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 07:24:10.641556    9587 out.go:179] * Verifying Kubernetes components...
	I0111 07:24:10.639540    9587 cli_runner.go:164] Run: docker container inspect addons-171536 --format={{.State.Status}}
	I0111 07:24:10.641711    9587 addons.go:70] Setting ingress-dns=true in profile "addons-171536"
	I0111 07:24:10.641728    9587 addons.go:239] Setting addon ingress-dns=true in "addons-171536"
	I0111 07:24:10.641825    9587 host.go:66] Checking if "addons-171536" exists ...
	I0111 07:24:10.641851    9587 cli_runner.go:164] Run: docker container inspect addons-171536 --format={{.State.Status}}
	I0111 07:24:10.642880    9587 addons.go:70] Setting registry=true in profile "addons-171536"
	I0111 07:24:10.642906    9587 addons.go:239] Setting addon registry=true in "addons-171536"
	I0111 07:24:10.642932    9587 host.go:66] Checking if "addons-171536" exists ...
	I0111 07:24:10.643433    9587 cli_runner.go:164] Run: docker container inspect addons-171536 --format={{.State.Status}}
	I0111 07:24:10.643708    9587 addons.go:70] Setting cloud-spanner=true in profile "addons-171536"
	I0111 07:24:10.643735    9587 addons.go:239] Setting addon cloud-spanner=true in "addons-171536"
	I0111 07:24:10.643764    9587 addons.go:70] Setting metrics-server=true in profile "addons-171536"
	I0111 07:24:10.643786    9587 addons.go:239] Setting addon metrics-server=true in "addons-171536"
	I0111 07:24:10.643816    9587 host.go:66] Checking if "addons-171536" exists ...
	I0111 07:24:10.643849    9587 addons.go:70] Setting volcano=true in profile "addons-171536"
	I0111 07:24:10.643868    9587 addons.go:239] Setting addon volcano=true in "addons-171536"
	I0111 07:24:10.643884    9587 host.go:66] Checking if "addons-171536" exists ...
	I0111 07:24:10.643909    9587 addons.go:70] Setting volumesnapshots=true in profile "addons-171536"
	I0111 07:24:10.643930    9587 addons.go:239] Setting addon volumesnapshots=true in "addons-171536"
	I0111 07:24:10.643963    9587 host.go:66] Checking if "addons-171536" exists ...
	I0111 07:24:10.643826    9587 host.go:66] Checking if "addons-171536" exists ...
	I0111 07:24:10.643835    9587 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-171536"
	I0111 07:24:10.644312    9587 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-171536"
	I0111 07:24:10.644342    9587 host.go:66] Checking if "addons-171536" exists ...
	I0111 07:24:10.644165    9587 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0111 07:24:10.649294    9587 cli_runner.go:164] Run: docker container inspect addons-171536 --format={{.State.Status}}
	I0111 07:24:10.649603    9587 cli_runner.go:164] Run: docker container inspect addons-171536 --format={{.State.Status}}
	I0111 07:24:10.650491    9587 cli_runner.go:164] Run: docker container inspect addons-171536 --format={{.State.Status}}
	I0111 07:24:10.650831    9587 cli_runner.go:164] Run: docker container inspect addons-171536 --format={{.State.Status}}
	I0111 07:24:10.652209    9587 cli_runner.go:164] Run: docker container inspect addons-171536 --format={{.State.Status}}
	I0111 07:24:10.652372    9587 cli_runner.go:164] Run: docker container inspect addons-171536 --format={{.State.Status}}
	I0111 07:24:10.661625    9587 cli_runner.go:164] Run: docker container inspect addons-171536 --format={{.State.Status}}
	I0111 07:24:10.682971    9587 out.go:179]   - Using image ghcr.io/manusa/yakd:0.0.7
	I0111 07:24:10.684262    9587 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0111 07:24:10.684281    9587 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0111 07:24:10.684353    9587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-171536
	I0111 07:24:10.695046    9587 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.48.0
	I0111 07:24:10.698459    9587 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I0111 07:24:10.698481    9587 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I0111 07:24:10.698536    9587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-171536
	I0111 07:24:10.703743    9587 addons.go:239] Setting addon default-storageclass=true in "addons-171536"
	I0111 07:24:10.710765    9587 host.go:66] Checking if "addons-171536" exists ...
	I0111 07:24:10.711280    9587 cli_runner.go:164] Run: docker container inspect addons-171536 --format={{.State.Status}}
	I0111 07:24:10.714562    9587 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I0111 07:24:10.715763    9587 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0111 07:24:10.715793    9587 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0111 07:24:10.715881    9587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-171536
	I0111 07:24:10.721230    9587 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I0111 07:24:10.721489    9587 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0111 07:24:10.723569    9587 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I0111 07:24:10.723598    9587 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I0111 07:24:10.723668    9587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-171536
	I0111 07:24:10.725766    9587 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0111 07:24:10.725789    9587 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0111 07:24:10.725856    9587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-171536
	I0111 07:24:10.729092    9587 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I0111 07:24:10.731408    9587 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0111 07:24:10.731427    9587 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I0111 07:24:10.731488    9587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-171536
	I0111 07:24:10.732760    9587 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.1
	I0111 07:24:10.734287    9587 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0111 07:24:10.734304    9587 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0111 07:24:10.734418    9587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-171536
	I0111 07:24:10.739043    9587 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22402-4689/.minikube/machines/addons-171536/id_rsa Username:docker}
	I0111 07:24:10.743761    9587 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I0111 07:24:10.744985    9587 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.1
	I0111 07:24:10.746279    9587 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I0111 07:24:10.747865    9587 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0111 07:24:10.747905    9587 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16257 bytes)
	I0111 07:24:10.748074    9587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-171536
	I0111 07:24:10.750002    9587 out.go:179]   - Using image docker.io/registry:3.0.0
	I0111 07:24:10.751186    9587 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	W0111 07:24:10.751502    9587 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0111 07:24:10.752842    9587 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I0111 07:24:10.752865    9587 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0111 07:24:10.752923    9587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-171536
	I0111 07:24:10.753490    9587 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I0111 07:24:10.754673    9587 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0111 07:24:10.754751    9587 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I0111 07:24:10.754892    9587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-171536
	I0111 07:24:10.755103    9587 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-171536"
	I0111 07:24:10.755243    9587 host.go:66] Checking if "addons-171536" exists ...
	I0111 07:24:10.755878    9587 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0111 07:24:10.756929    9587 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0111 07:24:10.756945    9587 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0111 07:24:10.756994    9587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-171536
	I0111 07:24:10.758046    9587 cli_runner.go:164] Run: docker container inspect addons-171536 --format={{.State.Status}}
	I0111 07:24:10.758313    9587 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0111 07:24:10.760592    9587 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0111 07:24:10.761382    9587 host.go:66] Checking if "addons-171536" exists ...
	I0111 07:24:10.762719    9587 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0111 07:24:10.763886    9587 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0111 07:24:10.765845    9587 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0111 07:24:10.773292    9587 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.46
	I0111 07:24:10.774657    9587 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I0111 07:24:10.774733    9587 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0111 07:24:10.774845    9587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-171536
	I0111 07:24:10.779563    9587 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0111 07:24:10.782741    9587 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I0111 07:24:10.785616    9587 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0111 07:24:10.785678    9587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-171536
	I0111 07:24:10.788641    9587 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0111 07:24:10.791030    9587 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0111 07:24:10.792701    9587 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0111 07:24:10.792779    9587 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0111 07:24:10.793101    9587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-171536
	I0111 07:24:10.799271    9587 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22402-4689/.minikube/machines/addons-171536/id_rsa Username:docker}
	I0111 07:24:10.801367    9587 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22402-4689/.minikube/machines/addons-171536/id_rsa Username:docker}
	I0111 07:24:10.802333    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
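The sed pipeline above patches the CoreDNS ConfigMap in place: it inserts a hosts{} stanza mapping host.minikube.internal to 192.168.49.1 before the "forward . /etc/resolv.conf" line, adds a log directive before errors, and replaces the ConfigMap via kubectl. A small Go sketch of just the hosts{} insertion on a Corefile string, assuming the forward line is present; the function name and sample Corefile are illustrative only:

package main

import (
	"fmt"
	"strings"
)

// injectHostsBlock inserts a hosts{} stanza for host.minikube.internal
// immediately before the "forward . /etc/resolv.conf" plugin line.
func injectHostsBlock(corefile, hostIP string) string {
	block := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", hostIP)
	var out strings.Builder
	for _, line := range strings.SplitAfter(corefile, "\n") {
		if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
			out.WriteString(block)
		}
		out.WriteString(line)
	}
	return out.String()
}

func main() {
	sample := ".:53 {\n        errors\n        forward . /etc/resolv.conf {\n           max_concurrent 1000\n        }\n}\n"
	fmt.Print(injectHostsBlock(sample, "192.168.49.1"))
}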
	I0111 07:24:10.805884    9587 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22402-4689/.minikube/machines/addons-171536/id_rsa Username:docker}
	I0111 07:24:10.811061    9587 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22402-4689/.minikube/machines/addons-171536/id_rsa Username:docker}
	I0111 07:24:10.818714    9587 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22402-4689/.minikube/machines/addons-171536/id_rsa Username:docker}
	I0111 07:24:10.822944    9587 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22402-4689/.minikube/machines/addons-171536/id_rsa Username:docker}
	I0111 07:24:10.823402    9587 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22402-4689/.minikube/machines/addons-171536/id_rsa Username:docker}
	I0111 07:24:10.826915    9587 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22402-4689/.minikube/machines/addons-171536/id_rsa Username:docker}
	I0111 07:24:10.831669    9587 out.go:179]   - Using image docker.io/busybox:stable
	I0111 07:24:10.833257    9587 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0111 07:24:10.834954    9587 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0111 07:24:10.834974    9587 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0111 07:24:10.835036    9587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-171536
	I0111 07:24:10.839195    9587 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22402-4689/.minikube/machines/addons-171536/id_rsa Username:docker}
	I0111 07:24:10.843250    9587 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22402-4689/.minikube/machines/addons-171536/id_rsa Username:docker}
	I0111 07:24:10.846335    9587 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22402-4689/.minikube/machines/addons-171536/id_rsa Username:docker}
	I0111 07:24:10.848864    9587 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22402-4689/.minikube/machines/addons-171536/id_rsa Username:docker}
	I0111 07:24:10.861434    9587 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22402-4689/.minikube/machines/addons-171536/id_rsa Username:docker}
	W0111 07:24:10.873229    9587 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0111 07:24:10.873294    9587 retry.go:84] will retry after 300ms: ssh: handshake failed: EOF
	I0111 07:24:10.880941    9587 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0111 07:24:10.886027    9587 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22402-4689/.minikube/machines/addons-171536/id_rsa Username:docker}
	I0111 07:24:10.956297    9587 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0111 07:24:10.956319    9587 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0111 07:24:10.973215    9587 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0111 07:24:10.973237    9587 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0111 07:24:10.977269    9587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0111 07:24:10.991558    9587 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0111 07:24:10.991580    9587 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0111 07:24:11.002915    9587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I0111 07:24:11.004878    9587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I0111 07:24:11.008199    9587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0111 07:24:11.011550    9587 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0111 07:24:11.011568    9587 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0111 07:24:11.016900    9587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0111 07:24:11.025425    9587 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0111 07:24:11.025502    9587 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0111 07:24:11.026544    9587 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0111 07:24:11.026559    9587 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0111 07:24:11.027916    9587 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0111 07:24:11.028135    9587 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0111 07:24:11.053306    9587 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0111 07:24:11.053334    9587 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0111 07:24:11.057126    9587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0111 07:24:11.064640    9587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0111 07:24:11.064967    9587 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I0111 07:24:11.064986    9587 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0111 07:24:11.066709    9587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0111 07:24:11.067430    9587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0111 07:24:11.075056    9587 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0111 07:24:11.075078    9587 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2013 bytes)
	I0111 07:24:11.083052    9587 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0111 07:24:11.083140    9587 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0111 07:24:11.087292    9587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0111 07:24:11.092561    9587 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0111 07:24:11.092582    9587 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0111 07:24:11.110171    9587 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0111 07:24:11.110278    9587 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0111 07:24:11.124245    9587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0111 07:24:11.140713    9587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0111 07:24:11.142227    9587 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0111 07:24:11.142318    9587 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0111 07:24:11.143228    9587 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0111 07:24:11.143247    9587 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0111 07:24:11.202020    9587 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0111 07:24:11.202051    9587 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0111 07:24:11.218873    9587 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0111 07:24:11.218905    9587 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0111 07:24:11.251093    9587 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0111 07:24:11.251122    9587 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0111 07:24:11.287955    9587 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0111 07:24:11.287995    9587 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0111 07:24:11.310995    9587 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0111 07:24:11.311026    9587 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0111 07:24:11.354750    9587 start.go:987] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0111 07:24:11.356668    9587 node_ready.go:35] waiting up to 6m0s for node "addons-171536" to be "Ready" ...
	I0111 07:24:11.363698    9587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0111 07:24:11.365574    9587 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0111 07:24:11.365596    9587 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0111 07:24:11.411032    9587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0111 07:24:11.419717    9587 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0111 07:24:11.419745    9587 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0111 07:24:11.504779    9587 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0111 07:24:11.504820    9587 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0111 07:24:11.570331    9587 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0111 07:24:11.570362    9587 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0111 07:24:11.632494    9587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0111 07:24:11.872174    9587 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-171536" context rescaled to 1 replicas
	I0111 07:24:12.138606    9587 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (1.13364493s)
	I0111 07:24:12.138670    9587 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (1.135597172s)
	I0111 07:24:12.138670    9587 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.130449394s)
	I0111 07:24:12.138691    9587 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (1.121732738s)
	I0111 07:24:12.138763    9587 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.081563358s)
	I0111 07:24:12.382934    9587 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (1.316194162s)
	I0111 07:24:12.383031    9587 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (1.315542993s)
	I0111 07:24:12.383304    9587 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.295985598s)
	I0111 07:24:12.383324    9587 addons.go:495] Verifying addon metrics-server=true in "addons-171536"
	I0111 07:24:12.383390    9587 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (1.259114862s)
	I0111 07:24:12.383704    9587 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (1.318982636s)
	I0111 07:24:12.383728    9587 addons.go:495] Verifying addon ingress=true in "addons-171536"
	I0111 07:24:12.383747    9587 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (1.242804091s)
	I0111 07:24:12.383814    9587 addons.go:495] Verifying addon registry=true in "addons-171536"
	I0111 07:24:12.396409    9587 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-171536 service yakd-dashboard -n yakd-dashboard
	
	I0111 07:24:12.407084    9587 out.go:179] * Verifying registry addon...
	I0111 07:24:12.407091    9587 out.go:179] * Verifying ingress addon...
	I0111 07:24:12.412814    9587 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0111 07:24:12.413951    9587 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0111 07:24:12.418254    9587 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0111 07:24:12.418280    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 07:24:12.418705    9587 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0111 07:24:12.418719    9587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
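	The two kapi.go waits above poll pods by a label selector until they leave Pending. To watch the same pods by hand while the addons come up (assuming minikube created the usual kubectl context named after the profile), roughly equivalent queries would be:
	
		kubectl --context addons-171536 -n kube-system get pods -l kubernetes.io/minikube-addons=registry
		kubectl --context addons-171536 -n ingress-nginx get pods -l app.kubernetes.io/name=ingress-nginx
	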
	I0111 07:24:12.907908    9587 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.544163506s)
	W0111 07:24:12.907963    9587 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
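	The failure above is an ordering problem rather than a broken manifest: the csi-hostpath-snapclass VolumeSnapshotClass is applied in the same kubectl apply batch as the CRDs that define it, so no REST mapping for snapshot.storage.k8s.io/v1 exists yet, hence "ensure CRDs are installed first". As the following lines show, minikube simply re-applies the batch with --force at 07:24:13 and it completes at 07:24:15 with no further error in this log. A manual sketch of the same recovery (assuming the manifests are already on the node) would install the CRD first and wait for it to become Established before applying the class:
	
		kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
		kubectl wait --for=condition=Established --timeout=60s crd/volumesnapshotclasses.snapshot.storage.k8s.io
		kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	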
	I0111 07:24:12.908010    9587 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.496879667s)
	I0111 07:24:12.908223    9587 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (1.275678962s)
	I0111 07:24:12.908263    9587 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-171536"
	I0111 07:24:12.909677    9587 out.go:179] * Verifying csi-hostpath-driver addon...
	I0111 07:24:12.911955    9587 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0111 07:24:12.914236    9587 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0111 07:24:12.914257    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 07:24:12.918474    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 07:24:12.918655    9587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 07:24:13.057939    9587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	W0111 07:24:13.360075    9587 node_ready.go:57] node "addons-171536" has "Ready":"False" status (will retry)
	I0111 07:24:13.415402    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 07:24:13.415568    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 07:24:13.416444    9587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 07:24:13.915543    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 07:24:13.915544    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 07:24:13.916351    9587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 07:24:14.415142    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 07:24:14.415475    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 07:24:14.416326    9587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 07:24:14.914748    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 07:24:14.915369    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 07:24:14.916213    9587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0111 07:24:15.360140    9587 node_ready.go:57] node "addons-171536" has "Ready":"False" status (will retry)
	I0111 07:24:15.415204    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 07:24:15.415218    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 07:24:15.416700    9587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 07:24:15.531757    9587 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.473775961s)
	I0111 07:24:15.915929    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 07:24:15.915929    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 07:24:15.916103    9587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 07:24:16.416077    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 07:24:16.416263    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 07:24:16.416320    9587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 07:24:16.915687    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 07:24:16.915722    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 07:24:16.916690    9587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 07:24:17.415162    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 07:24:17.415525    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 07:24:17.416560    9587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0111 07:24:17.859378    9587 node_ready.go:57] node "addons-171536" has "Ready":"False" status (will retry)
	I0111 07:24:17.915268    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 07:24:17.915817    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 07:24:17.916694    9587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 07:24:18.376687    9587 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0111 07:24:18.376764    9587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-171536
	I0111 07:24:18.394515    9587 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22402-4689/.minikube/machines/addons-171536/id_rsa Username:docker}
	I0111 07:24:18.415548    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 07:24:18.415664    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 07:24:18.418416    9587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 07:24:18.500553    9587 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0111 07:24:18.513050    9587 addons.go:239] Setting addon gcp-auth=true in "addons-171536"
	I0111 07:24:18.513100    9587 host.go:66] Checking if "addons-171536" exists ...
	I0111 07:24:18.513493    9587 cli_runner.go:164] Run: docker container inspect addons-171536 --format={{.State.Status}}
	I0111 07:24:18.531118    9587 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0111 07:24:18.531173    9587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-171536
	I0111 07:24:18.547943    9587 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22402-4689/.minikube/machines/addons-171536/id_rsa Username:docker}
	I0111 07:24:18.647854    9587 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I0111 07:24:18.649203    9587 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I0111 07:24:18.650493    9587 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0111 07:24:18.650509    9587 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0111 07:24:18.664017    9587 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0111 07:24:18.664045    9587 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0111 07:24:18.676581    9587 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0111 07:24:18.676599    9587 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0111 07:24:18.688653    9587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0111 07:24:18.914977    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 07:24:18.915633    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 07:24:18.916790    9587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 07:24:18.985054    9587 addons.go:495] Verifying addon gcp-auth=true in "addons-171536"
	I0111 07:24:18.986437    9587 out.go:179] * Verifying gcp-auth addon...
	I0111 07:24:18.988557    9587 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0111 07:24:18.991938    9587 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0111 07:24:18.991963    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 07:24:19.415140    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 07:24:19.415279    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 07:24:19.416939    9587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 07:24:19.491340    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0111 07:24:19.860180    9587 node_ready.go:57] node "addons-171536" has "Ready":"False" status (will retry)
	I0111 07:24:19.914763    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 07:24:19.915193    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 07:24:19.916450    9587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 07:24:19.991900    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 07:24:20.415740    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 07:24:20.415740    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 07:24:20.416959    9587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 07:24:20.491682    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 07:24:20.915391    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 07:24:20.915443    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 07:24:20.915952    9587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 07:24:20.991589    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 07:24:21.415074    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 07:24:21.415842    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 07:24:21.416838    9587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 07:24:21.491360    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 07:24:21.914613    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 07:24:21.915333    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 07:24:21.916473    9587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 07:24:21.991753    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0111 07:24:22.359419    9587 node_ready.go:57] node "addons-171536" has "Ready":"False" status (will retry)
	I0111 07:24:22.414673    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 07:24:22.415327    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 07:24:22.416207    9587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 07:24:22.491672    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 07:24:22.915347    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 07:24:22.915383    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 07:24:22.915936    9587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 07:24:22.991095    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 07:24:23.416155    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 07:24:23.416244    9587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 07:24:23.416291    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 07:24:23.491563    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 07:24:23.858696    9587 node_ready.go:49] node "addons-171536" is "Ready"
	I0111 07:24:23.858755    9587 node_ready.go:38] duration metric: took 12.502055702s for node "addons-171536" to be "Ready" ...
	I0111 07:24:23.858790    9587 api_server.go:52] waiting for apiserver process to appear ...
	I0111 07:24:23.858857    9587 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0111 07:24:23.874837    9587 api_server.go:72] duration metric: took 13.236043109s to wait for apiserver process to appear ...
	I0111 07:24:23.874867    9587 api_server.go:88] waiting for apiserver healthz status ...
	I0111 07:24:23.874904    9587 api_server.go:299] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0111 07:24:23.880415    9587 api_server.go:325] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0111 07:24:23.881233    9587 api_server.go:141] control plane version: v1.35.0
	I0111 07:24:23.881258    9587 api_server.go:131] duration metric: took 6.384793ms to wait for apiserver health ...
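	The healthz probe above hits the apiserver endpoint directly over HTTPS. To reproduce the check from the host, curl the same URL; -k skips verification of the cluster-CA-signed certificate, and on a default cluster anonymous access to /healthz is permitted (if it is not, this would return 403 instead of "ok"):
	
		curl -k https://192.168.49.2:8443/healthz
	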
	I0111 07:24:23.881267    9587 system_pods.go:43] waiting for kube-system pods to appear ...
	I0111 07:24:23.887154    9587 system_pods.go:59] 20 kube-system pods found
	I0111 07:24:23.887206    9587 system_pods.go:61] "amd-gpu-device-plugin-xz9v9" [cf7f647a-3dcb-409b-9b1e-6772fb528f34] Pending
	I0111 07:24:23.887219    9587 system_pods.go:61] "coredns-7d764666f9-c5mz2" [a0e15b1c-d415-44c7-854b-9b1e47f40987] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0111 07:24:23.887228    9587 system_pods.go:61] "csi-hostpath-attacher-0" [805ed0db-3375-453c-99cc-ad33b90caee7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0111 07:24:23.887244    9587 system_pods.go:61] "csi-hostpath-resizer-0" [98c22a47-d299-4f34-85ea-2c01d002fa87] Pending
	I0111 07:24:23.887253    9587 system_pods.go:61] "csi-hostpathplugin-pqz6c" [42e82e1d-ea95-4aca-a92e-59f4403eea1e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0111 07:24:23.887261    9587 system_pods.go:61] "etcd-addons-171536" [81dad9bf-4422-4bb5-8bc4-d0de7b4c279b] Running
	I0111 07:24:23.887267    9587 system_pods.go:61] "kindnet-8rpxw" [e121b2e7-25ee-4ca0-8a6b-bab4d1fc82b9] Running
	I0111 07:24:23.887279    9587 system_pods.go:61] "kube-apiserver-addons-171536" [733e888d-efd7-4563-8d2a-202ac3dec5db] Running
	I0111 07:24:23.887287    9587 system_pods.go:61] "kube-controller-manager-addons-171536" [b01f293a-7e2e-4927-b2c9-e8ea38b2b496] Running
	I0111 07:24:23.887304    9587 system_pods.go:61] "kube-ingress-dns-minikube" [9c7ffc99-1dd3-4247-8fbb-db3eb0b271a2] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0111 07:24:23.887314    9587 system_pods.go:61] "kube-proxy-nx2qs" [fccffe39-d254-440b-ab80-1b4f89b82427] Running
	I0111 07:24:23.887323    9587 system_pods.go:61] "kube-scheduler-addons-171536" [dbfa82d9-a7e1-461d-8269-e9e902d43e34] Running
	I0111 07:24:23.887331    9587 system_pods.go:61] "metrics-server-5778bb4788-v4bk9" [4dbdf9d6-d8d5-4310-88f7-d1e64748af7f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0111 07:24:23.887340    9587 system_pods.go:61] "nvidia-device-plugin-daemonset-zmrbr" [159b4a1d-9576-4833-83cf-df9e8385d4d7] Pending
	I0111 07:24:23.887345    9587 system_pods.go:61] "registry-788cd7d5bc-kpth5" [0bfa7dde-fb3b-431e-b149-2cd772930827] Pending
	I0111 07:24:23.887361    9587 system_pods.go:61] "registry-creds-567fb78d95-zq5dk" [08286141-05a5-426c-8156-61dfe02f5902] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0111 07:24:23.887369    9587 system_pods.go:61] "registry-proxy-npjfm" [66d3b5fb-2e47-407f-ad35-f70e2aa4041d] Pending
	I0111 07:24:23.887382    9587 system_pods.go:61] "snapshot-controller-6588d87457-nkfdp" [8160ca1a-b263-4370-b050-4807daa254ad] Pending
	I0111 07:24:23.887390    9587 system_pods.go:61] "snapshot-controller-6588d87457-vrlp4" [e7949d9f-29fc-4e0f-9fe3-564b17504c08] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0111 07:24:23.887401    9587 system_pods.go:61] "storage-provisioner" [e46d98de-d2ed-4988-9e35-faaec4eccc3f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0111 07:24:23.887417    9587 system_pods.go:74] duration metric: took 6.14478ms to wait for pod list to return data ...
	I0111 07:24:23.887426    9587 default_sa.go:34] waiting for default service account to be created ...
	I0111 07:24:23.890044    9587 default_sa.go:45] found service account: "default"
	I0111 07:24:23.890061    9587 default_sa.go:55] duration metric: took 2.629041ms for default service account to be created ...
	I0111 07:24:23.890069    9587 system_pods.go:116] waiting for k8s-apps to be running ...
	I0111 07:24:23.893909    9587 system_pods.go:86] 20 kube-system pods found
	I0111 07:24:23.893935    9587 system_pods.go:89] "amd-gpu-device-plugin-xz9v9" [cf7f647a-3dcb-409b-9b1e-6772fb528f34] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0111 07:24:23.893944    9587 system_pods.go:89] "coredns-7d764666f9-c5mz2" [a0e15b1c-d415-44c7-854b-9b1e47f40987] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0111 07:24:23.893951    9587 system_pods.go:89] "csi-hostpath-attacher-0" [805ed0db-3375-453c-99cc-ad33b90caee7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0111 07:24:23.893955    9587 system_pods.go:89] "csi-hostpath-resizer-0" [98c22a47-d299-4f34-85ea-2c01d002fa87] Pending
	I0111 07:24:23.893961    9587 system_pods.go:89] "csi-hostpathplugin-pqz6c" [42e82e1d-ea95-4aca-a92e-59f4403eea1e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0111 07:24:23.893964    9587 system_pods.go:89] "etcd-addons-171536" [81dad9bf-4422-4bb5-8bc4-d0de7b4c279b] Running
	I0111 07:24:23.893970    9587 system_pods.go:89] "kindnet-8rpxw" [e121b2e7-25ee-4ca0-8a6b-bab4d1fc82b9] Running
	I0111 07:24:23.893977    9587 system_pods.go:89] "kube-apiserver-addons-171536" [733e888d-efd7-4563-8d2a-202ac3dec5db] Running
	I0111 07:24:23.893981    9587 system_pods.go:89] "kube-controller-manager-addons-171536" [b01f293a-7e2e-4927-b2c9-e8ea38b2b496] Running
	I0111 07:24:23.893986    9587 system_pods.go:89] "kube-ingress-dns-minikube" [9c7ffc99-1dd3-4247-8fbb-db3eb0b271a2] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0111 07:24:23.893990    9587 system_pods.go:89] "kube-proxy-nx2qs" [fccffe39-d254-440b-ab80-1b4f89b82427] Running
	I0111 07:24:23.893993    9587 system_pods.go:89] "kube-scheduler-addons-171536" [dbfa82d9-a7e1-461d-8269-e9e902d43e34] Running
	I0111 07:24:23.893998    9587 system_pods.go:89] "metrics-server-5778bb4788-v4bk9" [4dbdf9d6-d8d5-4310-88f7-d1e64748af7f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0111 07:24:23.894002    9587 system_pods.go:89] "nvidia-device-plugin-daemonset-zmrbr" [159b4a1d-9576-4833-83cf-df9e8385d4d7] Pending
	I0111 07:24:23.894005    9587 system_pods.go:89] "registry-788cd7d5bc-kpth5" [0bfa7dde-fb3b-431e-b149-2cd772930827] Pending
	I0111 07:24:23.894010    9587 system_pods.go:89] "registry-creds-567fb78d95-zq5dk" [08286141-05a5-426c-8156-61dfe02f5902] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0111 07:24:23.894015    9587 system_pods.go:89] "registry-proxy-npjfm" [66d3b5fb-2e47-407f-ad35-f70e2aa4041d] Pending
	I0111 07:24:23.894018    9587 system_pods.go:89] "snapshot-controller-6588d87457-nkfdp" [8160ca1a-b263-4370-b050-4807daa254ad] Pending
	I0111 07:24:23.894024    9587 system_pods.go:89] "snapshot-controller-6588d87457-vrlp4" [e7949d9f-29fc-4e0f-9fe3-564b17504c08] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0111 07:24:23.894032    9587 system_pods.go:89] "storage-provisioner" [e46d98de-d2ed-4988-9e35-faaec4eccc3f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0111 07:24:23.894049    9587 retry.go:84] will retry after 200ms: missing components: kube-dns
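	The "missing components: kube-dns" retry above indicates the k8s-apps check appears to block only on a core set of components; everything it requires is already Running except CoreDNS (which carries the k8s-app=kube-dns label), so the loop backs off 200ms and polls again. A quick manual equivalent, under the same context assumption as above:
	
		kubectl --context addons-171536 -n kube-system get pods -l k8s-app=kube-dns
	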
	I0111 07:24:23.916545    9587 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0111 07:24:23.916575    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 07:24:23.917190    9587 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0111 07:24:23.917214    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 07:24:23.917243    9587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 07:24:24.018200    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 07:24:24.122428    9587 system_pods.go:86] 20 kube-system pods found
	I0111 07:24:24.122470    9587 system_pods.go:89] "amd-gpu-device-plugin-xz9v9" [cf7f647a-3dcb-409b-9b1e-6772fb528f34] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0111 07:24:24.122480    9587 system_pods.go:89] "coredns-7d764666f9-c5mz2" [a0e15b1c-d415-44c7-854b-9b1e47f40987] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0111 07:24:24.122489    9587 system_pods.go:89] "csi-hostpath-attacher-0" [805ed0db-3375-453c-99cc-ad33b90caee7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0111 07:24:24.122497    9587 system_pods.go:89] "csi-hostpath-resizer-0" [98c22a47-d299-4f34-85ea-2c01d002fa87] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0111 07:24:24.122508    9587 system_pods.go:89] "csi-hostpathplugin-pqz6c" [42e82e1d-ea95-4aca-a92e-59f4403eea1e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0111 07:24:24.122514    9587 system_pods.go:89] "etcd-addons-171536" [81dad9bf-4422-4bb5-8bc4-d0de7b4c279b] Running
	I0111 07:24:24.122523    9587 system_pods.go:89] "kindnet-8rpxw" [e121b2e7-25ee-4ca0-8a6b-bab4d1fc82b9] Running
	I0111 07:24:24.122529    9587 system_pods.go:89] "kube-apiserver-addons-171536" [733e888d-efd7-4563-8d2a-202ac3dec5db] Running
	I0111 07:24:24.122534    9587 system_pods.go:89] "kube-controller-manager-addons-171536" [b01f293a-7e2e-4927-b2c9-e8ea38b2b496] Running
	I0111 07:24:24.122556    9587 system_pods.go:89] "kube-ingress-dns-minikube" [9c7ffc99-1dd3-4247-8fbb-db3eb0b271a2] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0111 07:24:24.122565    9587 system_pods.go:89] "kube-proxy-nx2qs" [fccffe39-d254-440b-ab80-1b4f89b82427] Running
	I0111 07:24:24.122571    9587 system_pods.go:89] "kube-scheduler-addons-171536" [dbfa82d9-a7e1-461d-8269-e9e902d43e34] Running
	I0111 07:24:24.122580    9587 system_pods.go:89] "metrics-server-5778bb4788-v4bk9" [4dbdf9d6-d8d5-4310-88f7-d1e64748af7f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0111 07:24:24.122589    9587 system_pods.go:89] "nvidia-device-plugin-daemonset-zmrbr" [159b4a1d-9576-4833-83cf-df9e8385d4d7] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0111 07:24:24.122602    9587 system_pods.go:89] "registry-788cd7d5bc-kpth5" [0bfa7dde-fb3b-431e-b149-2cd772930827] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0111 07:24:24.122611    9587 system_pods.go:89] "registry-creds-567fb78d95-zq5dk" [08286141-05a5-426c-8156-61dfe02f5902] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0111 07:24:24.122622    9587 system_pods.go:89] "registry-proxy-npjfm" [66d3b5fb-2e47-407f-ad35-f70e2aa4041d] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0111 07:24:24.122647    9587 system_pods.go:89] "snapshot-controller-6588d87457-nkfdp" [8160ca1a-b263-4370-b050-4807daa254ad] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0111 07:24:24.122655    9587 system_pods.go:89] "snapshot-controller-6588d87457-vrlp4" [e7949d9f-29fc-4e0f-9fe3-564b17504c08] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0111 07:24:24.122669    9587 system_pods.go:89] "storage-provisioner" [e46d98de-d2ed-4988-9e35-faaec4eccc3f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0111 07:24:24.379268    9587 system_pods.go:86] 20 kube-system pods found
	I0111 07:24:24.379308    9587 system_pods.go:89] "amd-gpu-device-plugin-xz9v9" [cf7f647a-3dcb-409b-9b1e-6772fb528f34] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0111 07:24:24.379316    9587 system_pods.go:89] "coredns-7d764666f9-c5mz2" [a0e15b1c-d415-44c7-854b-9b1e47f40987] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0111 07:24:24.379323    9587 system_pods.go:89] "csi-hostpath-attacher-0" [805ed0db-3375-453c-99cc-ad33b90caee7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0111 07:24:24.379331    9587 system_pods.go:89] "csi-hostpath-resizer-0" [98c22a47-d299-4f34-85ea-2c01d002fa87] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0111 07:24:24.379340    9587 system_pods.go:89] "csi-hostpathplugin-pqz6c" [42e82e1d-ea95-4aca-a92e-59f4403eea1e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0111 07:24:24.379351    9587 system_pods.go:89] "etcd-addons-171536" [81dad9bf-4422-4bb5-8bc4-d0de7b4c279b] Running
	I0111 07:24:24.379358    9587 system_pods.go:89] "kindnet-8rpxw" [e121b2e7-25ee-4ca0-8a6b-bab4d1fc82b9] Running
	I0111 07:24:24.379367    9587 system_pods.go:89] "kube-apiserver-addons-171536" [733e888d-efd7-4563-8d2a-202ac3dec5db] Running
	I0111 07:24:24.379374    9587 system_pods.go:89] "kube-controller-manager-addons-171536" [b01f293a-7e2e-4927-b2c9-e8ea38b2b496] Running
	I0111 07:24:24.379386    9587 system_pods.go:89] "kube-ingress-dns-minikube" [9c7ffc99-1dd3-4247-8fbb-db3eb0b271a2] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0111 07:24:24.379397    9587 system_pods.go:89] "kube-proxy-nx2qs" [fccffe39-d254-440b-ab80-1b4f89b82427] Running
	I0111 07:24:24.379403    9587 system_pods.go:89] "kube-scheduler-addons-171536" [dbfa82d9-a7e1-461d-8269-e9e902d43e34] Running
	I0111 07:24:24.379414    9587 system_pods.go:89] "metrics-server-5778bb4788-v4bk9" [4dbdf9d6-d8d5-4310-88f7-d1e64748af7f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0111 07:24:24.379427    9587 system_pods.go:89] "nvidia-device-plugin-daemonset-zmrbr" [159b4a1d-9576-4833-83cf-df9e8385d4d7] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0111 07:24:24.379441    9587 system_pods.go:89] "registry-788cd7d5bc-kpth5" [0bfa7dde-fb3b-431e-b149-2cd772930827] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0111 07:24:24.379450    9587 system_pods.go:89] "registry-creds-567fb78d95-zq5dk" [08286141-05a5-426c-8156-61dfe02f5902] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0111 07:24:24.379463    9587 system_pods.go:89] "registry-proxy-npjfm" [66d3b5fb-2e47-407f-ad35-f70e2aa4041d] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0111 07:24:24.379475    9587 system_pods.go:89] "snapshot-controller-6588d87457-nkfdp" [8160ca1a-b263-4370-b050-4807daa254ad] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0111 07:24:24.379483    9587 system_pods.go:89] "snapshot-controller-6588d87457-vrlp4" [e7949d9f-29fc-4e0f-9fe3-564b17504c08] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0111 07:24:24.379494    9587 system_pods.go:89] "storage-provisioner" [e46d98de-d2ed-4988-9e35-faaec4eccc3f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0111 07:24:24.416300    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 07:24:24.416909    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 07:24:24.417091    9587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 07:24:24.495336    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 07:24:24.706121    9587 system_pods.go:86] 20 kube-system pods found
	I0111 07:24:24.706160    9587 system_pods.go:89] "amd-gpu-device-plugin-xz9v9" [cf7f647a-3dcb-409b-9b1e-6772fb528f34] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0111 07:24:24.706171    9587 system_pods.go:89] "coredns-7d764666f9-c5mz2" [a0e15b1c-d415-44c7-854b-9b1e47f40987] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0111 07:24:24.706184    9587 system_pods.go:89] "csi-hostpath-attacher-0" [805ed0db-3375-453c-99cc-ad33b90caee7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0111 07:24:24.706194    9587 system_pods.go:89] "csi-hostpath-resizer-0" [98c22a47-d299-4f34-85ea-2c01d002fa87] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0111 07:24:24.706203    9587 system_pods.go:89] "csi-hostpathplugin-pqz6c" [42e82e1d-ea95-4aca-a92e-59f4403eea1e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0111 07:24:24.706209    9587 system_pods.go:89] "etcd-addons-171536" [81dad9bf-4422-4bb5-8bc4-d0de7b4c279b] Running
	I0111 07:24:24.706215    9587 system_pods.go:89] "kindnet-8rpxw" [e121b2e7-25ee-4ca0-8a6b-bab4d1fc82b9] Running
	I0111 07:24:24.706228    9587 system_pods.go:89] "kube-apiserver-addons-171536" [733e888d-efd7-4563-8d2a-202ac3dec5db] Running
	I0111 07:24:24.706234    9587 system_pods.go:89] "kube-controller-manager-addons-171536" [b01f293a-7e2e-4927-b2c9-e8ea38b2b496] Running
	I0111 07:24:24.706244    9587 system_pods.go:89] "kube-ingress-dns-minikube" [9c7ffc99-1dd3-4247-8fbb-db3eb0b271a2] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0111 07:24:24.706251    9587 system_pods.go:89] "kube-proxy-nx2qs" [fccffe39-d254-440b-ab80-1b4f89b82427] Running
	I0111 07:24:24.706257    9587 system_pods.go:89] "kube-scheduler-addons-171536" [dbfa82d9-a7e1-461d-8269-e9e902d43e34] Running
	I0111 07:24:24.706277    9587 system_pods.go:89] "metrics-server-5778bb4788-v4bk9" [4dbdf9d6-d8d5-4310-88f7-d1e64748af7f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0111 07:24:24.706294    9587 system_pods.go:89] "nvidia-device-plugin-daemonset-zmrbr" [159b4a1d-9576-4833-83cf-df9e8385d4d7] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0111 07:24:24.706308    9587 system_pods.go:89] "registry-788cd7d5bc-kpth5" [0bfa7dde-fb3b-431e-b149-2cd772930827] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0111 07:24:24.706321    9587 system_pods.go:89] "registry-creds-567fb78d95-zq5dk" [08286141-05a5-426c-8156-61dfe02f5902] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0111 07:24:24.706336    9587 system_pods.go:89] "registry-proxy-npjfm" [66d3b5fb-2e47-407f-ad35-f70e2aa4041d] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0111 07:24:24.706349    9587 system_pods.go:89] "snapshot-controller-6588d87457-nkfdp" [8160ca1a-b263-4370-b050-4807daa254ad] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0111 07:24:24.706364    9587 system_pods.go:89] "snapshot-controller-6588d87457-vrlp4" [e7949d9f-29fc-4e0f-9fe3-564b17504c08] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0111 07:24:24.706371    9587 system_pods.go:89] "storage-provisioner" [e46d98de-d2ed-4988-9e35-faaec4eccc3f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0111 07:24:24.916910    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 07:24:24.917037    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 07:24:24.918100    9587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 07:24:24.992261    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 07:24:25.211066    9587 system_pods.go:86] 20 kube-system pods found
	I0111 07:24:25.211105    9587 system_pods.go:89] "amd-gpu-device-plugin-xz9v9" [cf7f647a-3dcb-409b-9b1e-6772fb528f34] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0111 07:24:25.211116    9587 system_pods.go:89] "coredns-7d764666f9-c5mz2" [a0e15b1c-d415-44c7-854b-9b1e47f40987] Running
	I0111 07:24:25.211127    9587 system_pods.go:89] "csi-hostpath-attacher-0" [805ed0db-3375-453c-99cc-ad33b90caee7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0111 07:24:25.211136    9587 system_pods.go:89] "csi-hostpath-resizer-0" [98c22a47-d299-4f34-85ea-2c01d002fa87] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0111 07:24:25.211145    9587 system_pods.go:89] "csi-hostpathplugin-pqz6c" [42e82e1d-ea95-4aca-a92e-59f4403eea1e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0111 07:24:25.211159    9587 system_pods.go:89] "etcd-addons-171536" [81dad9bf-4422-4bb5-8bc4-d0de7b4c279b] Running
	I0111 07:24:25.211183    9587 system_pods.go:89] "kindnet-8rpxw" [e121b2e7-25ee-4ca0-8a6b-bab4d1fc82b9] Running
	I0111 07:24:25.211189    9587 system_pods.go:89] "kube-apiserver-addons-171536" [733e888d-efd7-4563-8d2a-202ac3dec5db] Running
	I0111 07:24:25.211202    9587 system_pods.go:89] "kube-controller-manager-addons-171536" [b01f293a-7e2e-4927-b2c9-e8ea38b2b496] Running
	I0111 07:24:25.211217    9587 system_pods.go:89] "kube-ingress-dns-minikube" [9c7ffc99-1dd3-4247-8fbb-db3eb0b271a2] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0111 07:24:25.211222    9587 system_pods.go:89] "kube-proxy-nx2qs" [fccffe39-d254-440b-ab80-1b4f89b82427] Running
	I0111 07:24:25.211231    9587 system_pods.go:89] "kube-scheduler-addons-171536" [dbfa82d9-a7e1-461d-8269-e9e902d43e34] Running
	I0111 07:24:25.211239    9587 system_pods.go:89] "metrics-server-5778bb4788-v4bk9" [4dbdf9d6-d8d5-4310-88f7-d1e64748af7f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0111 07:24:25.211252    9587 system_pods.go:89] "nvidia-device-plugin-daemonset-zmrbr" [159b4a1d-9576-4833-83cf-df9e8385d4d7] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0111 07:24:25.211261    9587 system_pods.go:89] "registry-788cd7d5bc-kpth5" [0bfa7dde-fb3b-431e-b149-2cd772930827] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0111 07:24:25.211273    9587 system_pods.go:89] "registry-creds-567fb78d95-zq5dk" [08286141-05a5-426c-8156-61dfe02f5902] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0111 07:24:25.211281    9587 system_pods.go:89] "registry-proxy-npjfm" [66d3b5fb-2e47-407f-ad35-f70e2aa4041d] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0111 07:24:25.211289    9587 system_pods.go:89] "snapshot-controller-6588d87457-nkfdp" [8160ca1a-b263-4370-b050-4807daa254ad] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0111 07:24:25.211315    9587 system_pods.go:89] "snapshot-controller-6588d87457-vrlp4" [e7949d9f-29fc-4e0f-9fe3-564b17504c08] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0111 07:24:25.211324    9587 system_pods.go:89] "storage-provisioner" [e46d98de-d2ed-4988-9e35-faaec4eccc3f] Running
	I0111 07:24:25.211334    9587 system_pods.go:126] duration metric: took 1.321258998s to wait for k8s-apps to be running ...
	I0111 07:24:25.211343    9587 system_svc.go:44] waiting for kubelet service to be running ....
	I0111 07:24:25.211396    9587 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0111 07:24:25.229341    9587 system_svc.go:56] duration metric: took 17.989716ms WaitForService to wait for kubelet
	I0111 07:24:25.229374    9587 kubeadm.go:587] duration metric: took 14.590585581s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0111 07:24:25.229399    9587 node_conditions.go:102] verifying NodePressure condition ...
	I0111 07:24:25.232857    9587 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0111 07:24:25.232898    9587 node_conditions.go:123] node cpu capacity is 8
	I0111 07:24:25.232917    9587 node_conditions.go:105] duration metric: took 3.512847ms to run NodePressure ...
	I0111 07:24:25.232932    9587 start.go:242] waiting for startup goroutines ...
	I0111 07:24:25.415740    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 07:24:25.415966    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 07:24:25.416494    9587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 07:24:25.492652    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 07:24:25.915214    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 07:24:25.915547    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 07:24:25.916624    9587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 07:24:25.992382    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 07:24:26.416433    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 07:24:26.416471    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 07:24:26.417216    9587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 07:24:26.491807    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 07:24:26.915886    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 07:24:26.916089    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 07:24:26.916094    9587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 07:24:26.991328    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 07:24:27.415929    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 07:24:27.416055    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 07:24:27.417293    9587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 07:24:27.491770    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 07:24:27.915510    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 07:24:27.916343    9587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 07:24:27.917527    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 07:24:27.992213    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 07:24:28.424639    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 07:24:28.424889    9587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 07:24:28.424897    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 07:24:28.491404    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 07:24:28.918058    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 07:24:28.918460    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 07:24:28.918693    9587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 07:24:28.992695    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 07:24:29.416186    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 07:24:29.416441    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 07:24:29.417186    9587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 07:24:29.491951    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 07:24:29.916094    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 07:24:29.916160    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 07:24:29.916286    9587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 07:24:29.992604    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 07:24:30.416793    9587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 07:24:30.416914    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 07:24:30.417122    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 07:24:30.492196    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 07:24:30.915604    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 07:24:30.915714    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 07:24:30.916309    9587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 07:24:30.992133    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 07:24:31.416184    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 07:24:31.416189    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 07:24:31.416640    9587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 07:24:31.492576    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 07:24:31.915908    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 07:24:31.916003    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 07:24:31.916026    9587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 07:24:32.015854    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 07:24:32.415544    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 07:24:32.415629    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 07:24:32.416382    9587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 07:24:32.491960    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 07:24:32.915139    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 07:24:32.915497    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 07:24:32.916620    9587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 07:24:32.993624    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 07:24:33.417099    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 07:24:33.417933    9587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 07:24:33.417998    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 07:24:33.491599    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 07:24:33.915824    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 07:24:33.916152    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 07:24:33.916967    9587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 07:24:33.991556    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 07:24:34.416773    9587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 07:24:34.416891    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 07:24:34.416905    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 07:24:34.492855    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 07:24:34.915996    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 07:24:34.916035    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 07:24:34.916291    9587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 07:24:34.991430    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 07:24:35.416588    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 07:24:35.416748    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 07:24:35.417292    9587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 07:24:35.492034    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 07:24:35.916516    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 07:24:35.916532    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 07:24:35.916710    9587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 07:24:35.992354    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 07:24:36.459018    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 07:24:36.459088    9587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 07:24:36.459116    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 07:24:36.491158    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 07:24:36.916154    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 07:24:36.916527    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 07:24:36.916826    9587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 07:24:36.992238    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 07:24:37.416139    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 07:24:37.416363    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 07:24:37.417159    9587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 07:24:37.492082    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 07:24:37.915682    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 07:24:37.916020    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 07:24:37.916670    9587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 07:24:38.016235    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 07:24:38.416065    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 07:24:38.416154    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 07:24:38.417057    9587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 07:24:38.492530    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 07:24:38.917364    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 07:24:38.917660    9587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 07:24:38.917780    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 07:24:38.998774    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 07:24:39.415784    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 07:24:39.415853    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 07:24:39.417731    9587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 07:24:39.516199    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 07:24:39.915548    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 07:24:39.915772    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 07:24:39.916914    9587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 07:24:39.991245    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 07:24:40.415887    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 07:24:40.416025    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 07:24:40.416703    9587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 07:24:40.491182    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 07:24:40.915554    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 07:24:40.915721    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 07:24:40.916795    9587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 07:24:40.991984    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 07:24:41.415841    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 07:24:41.415916    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 07:24:41.416323    9587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 07:24:41.491850    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 07:24:41.915681    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 07:24:41.915702    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 07:24:41.916295    9587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 07:24:41.992242    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 07:24:42.416111    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 07:24:42.416216    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 07:24:42.416734    9587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 07:24:42.491685    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 07:24:42.916966    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 07:24:42.917106    9587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 07:24:42.917335    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 07:24:42.991267    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 07:24:43.415652    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 07:24:43.415896    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 07:24:43.416982    9587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 07:24:43.515877    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 07:24:43.916292    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 07:24:43.916373    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 07:24:43.916901    9587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 07:24:43.991668    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 07:24:44.416430    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 07:24:44.416492    9587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 07:24:44.416505    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 07:24:44.517112    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 07:24:44.915415    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 07:24:44.915842    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 07:24:44.916874    9587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 07:24:44.991763    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 07:24:45.414771    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 07:24:45.415265    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 07:24:45.416199    9587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 07:24:45.491993    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 07:24:45.915171    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 07:24:45.915285    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 07:24:45.916424    9587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 07:24:45.991902    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 07:24:46.483158    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 07:24:46.571308    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 07:24:46.571361    9587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 07:24:46.571529    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 07:24:46.916654    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 07:24:46.916821    9587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 07:24:46.916828    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 07:24:46.992422    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 07:24:47.415831    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 07:24:47.416027    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 07:24:47.417080    9587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 07:24:47.492523    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 07:24:47.916623    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 07:24:47.916752    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 07:24:47.916793    9587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 07:24:47.992332    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 07:24:48.416095    9587 kapi.go:107] duration metric: took 36.003295827s to wait for kubernetes.io/minikube-addons=registry ...
	I0111 07:24:48.416277    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 07:24:48.416897    9587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 07:24:48.491882    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 07:24:48.915110    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 07:24:48.916596    9587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 07:24:48.992041    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 07:24:49.415458    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 07:24:49.416857    9587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 07:24:49.515360    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 07:24:49.917489    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 07:24:49.917576    9587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 07:24:49.992903    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 07:24:50.415966    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 07:24:50.417105    9587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 07:24:50.492293    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 07:24:50.916275    9587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 07:24:50.916290    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 07:24:50.992041    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 07:24:51.415431    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 07:24:51.416949    9587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 07:24:51.491677    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 07:24:51.915771    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 07:24:51.916191    9587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 07:24:51.992127    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 07:24:52.415947    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 07:24:52.417921    9587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 07:24:52.491464    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 07:24:52.915984    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 07:24:52.917337    9587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 07:24:53.016525    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 07:24:53.416479    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 07:24:53.417827    9587 kapi.go:107] duration metric: took 41.003874009s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0111 07:24:53.490954    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 07:24:53.915855    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 07:24:53.994165    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 07:24:54.472688    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 07:24:54.493587    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 07:24:54.916092    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 07:24:54.991485    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 07:24:55.417215    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 07:24:55.491770    9587 kapi.go:107] duration metric: took 36.503209254s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0111 07:24:55.493795    9587 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-171536 cluster.
	I0111 07:24:55.495139    9587 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0111 07:24:55.496540    9587 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0111 07:24:55.915201    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 07:24:56.415575    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 07:24:56.915919    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 07:24:57.415567    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 07:24:57.915979    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 07:24:58.415747    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 07:24:58.915293    9587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 07:24:59.416047    9587 kapi.go:107] duration metric: took 46.504099201s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0111 07:24:59.417747    9587 out.go:179] * Enabled addons: ingress-dns, registry-creds, inspektor-gadget, nvidia-device-plugin, amd-gpu-device-plugin, storage-provisioner, cloud-spanner, metrics-server, yakd, storage-provisioner-rancher, default-storageclass, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0111 07:24:59.418874    9587 addons.go:530] duration metric: took 48.780059457s for enable addons: enabled=[ingress-dns registry-creds inspektor-gadget nvidia-device-plugin amd-gpu-device-plugin storage-provisioner cloud-spanner metrics-server yakd storage-provisioner-rancher default-storageclass volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0111 07:24:59.418910    9587 start.go:247] waiting for cluster config update ...
	I0111 07:24:59.418927    9587 start.go:256] writing updated cluster config ...
	I0111 07:24:59.419175    9587 ssh_runner.go:195] Run: rm -f paused
	I0111 07:24:59.422924    9587 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0111 07:24:59.425654    9587 pod_ready.go:83] waiting for pod "coredns-7d764666f9-c5mz2" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 07:24:59.429464    9587 pod_ready.go:94] pod "coredns-7d764666f9-c5mz2" is "Ready"
	I0111 07:24:59.429481    9587 pod_ready.go:86] duration metric: took 3.81022ms for pod "coredns-7d764666f9-c5mz2" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 07:24:59.431205    9587 pod_ready.go:83] waiting for pod "etcd-addons-171536" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 07:24:59.434268    9587 pod_ready.go:94] pod "etcd-addons-171536" is "Ready"
	I0111 07:24:59.434284    9587 pod_ready.go:86] duration metric: took 3.063361ms for pod "etcd-addons-171536" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 07:24:59.435775    9587 pod_ready.go:83] waiting for pod "kube-apiserver-addons-171536" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 07:24:59.438857    9587 pod_ready.go:94] pod "kube-apiserver-addons-171536" is "Ready"
	I0111 07:24:59.438877    9587 pod_ready.go:86] duration metric: took 3.083969ms for pod "kube-apiserver-addons-171536" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 07:24:59.440412    9587 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-171536" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 07:24:59.826123    9587 pod_ready.go:94] pod "kube-controller-manager-addons-171536" is "Ready"
	I0111 07:24:59.826148    9587 pod_ready.go:86] duration metric: took 385.71873ms for pod "kube-controller-manager-addons-171536" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 07:25:00.027078    9587 pod_ready.go:83] waiting for pod "kube-proxy-nx2qs" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 07:25:00.426740    9587 pod_ready.go:94] pod "kube-proxy-nx2qs" is "Ready"
	I0111 07:25:00.426768    9587 pod_ready.go:86] duration metric: took 399.66158ms for pod "kube-proxy-nx2qs" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 07:25:00.626912    9587 pod_ready.go:83] waiting for pod "kube-scheduler-addons-171536" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 07:25:01.026684    9587 pod_ready.go:94] pod "kube-scheduler-addons-171536" is "Ready"
	I0111 07:25:01.026716    9587 pod_ready.go:86] duration metric: took 399.779706ms for pod "kube-scheduler-addons-171536" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 07:25:01.026730    9587 pod_ready.go:40] duration metric: took 1.603781591s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0111 07:25:01.069018    9587 start.go:625] kubectl: 1.35.0, cluster: 1.35.0 (minor skew: 0)
	I0111 07:25:01.070840    9587 out.go:179] * Done! kubectl is now configured to use "addons-171536" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Jan 11 07:25:03 addons-171536 crio[768]: time="2026-01-11T07:25:03.152777869Z" level=info msg="Starting container: c166e03c7107a6dab06f8e409c1edef04f2b82c65559b669caa44cc678b6cc30" id=da31347f-e5b8-467c-ae99-71bf4eb7466b name=/runtime.v1.RuntimeService/StartContainer
	Jan 11 07:25:03 addons-171536 crio[768]: time="2026-01-11T07:25:03.154691423Z" level=info msg="Started container" PID=6431 containerID=c166e03c7107a6dab06f8e409c1edef04f2b82c65559b669caa44cc678b6cc30 description=default/busybox/busybox id=da31347f-e5b8-467c-ae99-71bf4eb7466b name=/runtime.v1.RuntimeService/StartContainer sandboxID=104069e1891a504f4937aff115d60edaa71c838a919892414fa04a68504b5c42
	Jan 11 07:25:09 addons-171536 crio[768]: time="2026-01-11T07:25:09.711562668Z" level=info msg="Running pod sandbox: local-path-storage/helper-pod-create-pvc-8c039fff-4506-4bf1-8e4f-d24f60e4bc19/POD" id=c659d4bb-6fda-4055-89f3-9a5d3c419e1d name=/runtime.v1.RuntimeService/RunPodSandbox
	Jan 11 07:25:09 addons-171536 crio[768]: time="2026-01-11T07:25:09.711643438Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 11 07:25:09 addons-171536 crio[768]: time="2026-01-11T07:25:09.718715158Z" level=info msg="Got pod network &{Name:helper-pod-create-pvc-8c039fff-4506-4bf1-8e4f-d24f60e4bc19 Namespace:local-path-storage ID:cf940bbeaa86650c769500492833755fc8c8fd2ba647f9b94e9cd54627c015fb UID:7c227e99-05c5-4823-9c4f-ff6546926237 NetNS:/var/run/netns/10dae36e-41a9-49f6-8649-5af31a52f538 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000b7e778}] Aliases:map[]}"
	Jan 11 07:25:09 addons-171536 crio[768]: time="2026-01-11T07:25:09.718743331Z" level=info msg="Adding pod local-path-storage_helper-pod-create-pvc-8c039fff-4506-4bf1-8e4f-d24f60e4bc19 to CNI network \"kindnet\" (type=ptp)"
	Jan 11 07:25:09 addons-171536 crio[768]: time="2026-01-11T07:25:09.736405171Z" level=info msg="Got pod network &{Name:helper-pod-create-pvc-8c039fff-4506-4bf1-8e4f-d24f60e4bc19 Namespace:local-path-storage ID:cf940bbeaa86650c769500492833755fc8c8fd2ba647f9b94e9cd54627c015fb UID:7c227e99-05c5-4823-9c4f-ff6546926237 NetNS:/var/run/netns/10dae36e-41a9-49f6-8649-5af31a52f538 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000b7e778}] Aliases:map[]}"
	Jan 11 07:25:09 addons-171536 crio[768]: time="2026-01-11T07:25:09.736526007Z" level=info msg="Checking pod local-path-storage_helper-pod-create-pvc-8c039fff-4506-4bf1-8e4f-d24f60e4bc19 for CNI network kindnet (type=ptp)"
	Jan 11 07:25:09 addons-171536 crio[768]: time="2026-01-11T07:25:09.737470041Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Jan 11 07:25:09 addons-171536 crio[768]: time="2026-01-11T07:25:09.738216503Z" level=info msg="Ran pod sandbox cf940bbeaa86650c769500492833755fc8c8fd2ba647f9b94e9cd54627c015fb with infra container: local-path-storage/helper-pod-create-pvc-8c039fff-4506-4bf1-8e4f-d24f60e4bc19/POD" id=c659d4bb-6fda-4055-89f3-9a5d3c419e1d name=/runtime.v1.RuntimeService/RunPodSandbox
	Jan 11 07:25:09 addons-171536 crio[768]: time="2026-01-11T07:25:09.739474912Z" level=info msg="Checking image status: docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" id=79315f17-6d35-4513-8761-ad15cca58528 name=/runtime.v1.ImageService/ImageStatus
	Jan 11 07:25:09 addons-171536 crio[768]: time="2026-01-11T07:25:09.739633068Z" level=info msg="Image docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79 not found" id=79315f17-6d35-4513-8761-ad15cca58528 name=/runtime.v1.ImageService/ImageStatus
	Jan 11 07:25:09 addons-171536 crio[768]: time="2026-01-11T07:25:09.739704394Z" level=info msg="Neither image nor artfiact docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79 found" id=79315f17-6d35-4513-8761-ad15cca58528 name=/runtime.v1.ImageService/ImageStatus
	Jan 11 07:25:09 addons-171536 crio[768]: time="2026-01-11T07:25:09.743912127Z" level=info msg="Pulling image: docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" id=369c080d-130e-4d1a-9a75-3e136841f197 name=/runtime.v1.ImageService/PullImage
	Jan 11 07:25:09 addons-171536 crio[768]: time="2026-01-11T07:25:09.744280857Z" level=info msg="Trying to access \"docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79\""
	Jan 11 07:25:10 addons-171536 crio[768]: time="2026-01-11T07:25:10.277866855Z" level=info msg="Pulled image: docker.io/library/busybox@sha256:023917ec6a886d0e8e15f28fb543515a5fcd8d938edb091e8147db4efed388ee" id=369c080d-130e-4d1a-9a75-3e136841f197 name=/runtime.v1.ImageService/PullImage
	Jan 11 07:25:10 addons-171536 crio[768]: time="2026-01-11T07:25:10.27846034Z" level=info msg="Checking image status: docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" id=cf7a4529-44dd-4bd9-bb10-be11e9a25abf name=/runtime.v1.ImageService/ImageStatus
	Jan 11 07:25:10 addons-171536 crio[768]: time="2026-01-11T07:25:10.280376574Z" level=info msg="Checking image status: docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" id=e9b4b43a-7bbc-4878-b9ce-dd9e3f507e75 name=/runtime.v1.ImageService/ImageStatus
	Jan 11 07:25:10 addons-171536 crio[768]: time="2026-01-11T07:25:10.284489934Z" level=info msg="Creating container: local-path-storage/helper-pod-create-pvc-8c039fff-4506-4bf1-8e4f-d24f60e4bc19/helper-pod" id=357e87a8-d6c2-4692-85c5-fa3be4d4f12a name=/runtime.v1.RuntimeService/CreateContainer
	Jan 11 07:25:10 addons-171536 crio[768]: time="2026-01-11T07:25:10.284634755Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 11 07:25:10 addons-171536 crio[768]: time="2026-01-11T07:25:10.290907923Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 11 07:25:10 addons-171536 crio[768]: time="2026-01-11T07:25:10.291413943Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 11 07:25:10 addons-171536 crio[768]: time="2026-01-11T07:25:10.324643714Z" level=info msg="Created container 57f98cc12e47d2f943f028aa50f59dd88ca13733e2b1853be99c9b7dff76c141: local-path-storage/helper-pod-create-pvc-8c039fff-4506-4bf1-8e4f-d24f60e4bc19/helper-pod" id=357e87a8-d6c2-4692-85c5-fa3be4d4f12a name=/runtime.v1.RuntimeService/CreateContainer
	Jan 11 07:25:10 addons-171536 crio[768]: time="2026-01-11T07:25:10.325255217Z" level=info msg="Starting container: 57f98cc12e47d2f943f028aa50f59dd88ca13733e2b1853be99c9b7dff76c141" id=9e267f61-616b-4a9f-be75-b328de14ebe9 name=/runtime.v1.RuntimeService/StartContainer
	Jan 11 07:25:10 addons-171536 crio[768]: time="2026-01-11T07:25:10.327073637Z" level=info msg="Started container" PID=6728 containerID=57f98cc12e47d2f943f028aa50f59dd88ca13733e2b1853be99c9b7dff76c141 description=local-path-storage/helper-pod-create-pvc-8c039fff-4506-4bf1-8e4f-d24f60e4bc19/helper-pod id=9e267f61-616b-4a9f-be75-b328de14ebe9 name=/runtime.v1.RuntimeService/StartContainer sandboxID=cf940bbeaa86650c769500492833755fc8c8fd2ba647f9b94e9cd54627c015fb
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED                  STATE               NAME                                     ATTEMPT             POD ID              POD                                                          NAMESPACE
	57f98cc12e47d       docker.io/library/busybox@sha256:023917ec6a886d0e8e15f28fb543515a5fcd8d938edb091e8147db4efed388ee                                            Less than a second ago   Exited              helper-pod                               0                   cf940bbeaa866       helper-pod-create-pvc-8c039fff-4506-4bf1-8e4f-d24f60e4bc19   local-path-storage
	c166e03c7107a       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998                                          7 seconds ago            Running             busybox                                  0                   104069e1891a5       busybox                                                      default
	dec2189f99757       registry.k8s.io/sig-storage/csi-snapshotter@sha256:d844cb1faeb4ecf44bae6aea370c9c6128a87e665e40370021427d79a8819ee5                          11 seconds ago           Running             csi-snapshotter                          0                   7a586eac8e09c       csi-hostpathplugin-pqz6c                                     kube-system
	020fa17cec32b       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          12 seconds ago           Running             csi-provisioner                          0                   7a586eac8e09c       csi-hostpathplugin-pqz6c                                     kube-system
	4e96c6658214a       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            13 seconds ago           Running             liveness-probe                           0                   7a586eac8e09c       csi-hostpathplugin-pqz6c                                     kube-system
	61261d2742245       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           14 seconds ago           Running             hostpath                                 0                   7a586eac8e09c       csi-hostpathplugin-pqz6c                                     kube-system
	e82c2d3219128       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                14 seconds ago           Running             node-driver-registrar                    0                   7a586eac8e09c       csi-hostpathplugin-pqz6c                                     kube-system
	6dc54bbabe509       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:441f351b4520c228d29ba8c02a438d9ba971dafbbba5c91eaf882b1528797fb8                                 15 seconds ago           Running             gcp-auth                                 0                   86e7ef9d8fe10       gcp-auth-5bbcf684b5-kzglr                                    gcp-auth
	2a183e86537ca       registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad                             17 seconds ago           Running             controller                               0                   6409f9542a1bd       ingress-nginx-controller-7847b5c79c-slxwd                    ingress-nginx
	ae12b210b9c84       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:ec62224230f19a2cf1dfa480d6b31c048eaa365d192ceb554d2de6304e938d8c                            21 seconds ago           Running             gadget                                   0                   39faf4c02b10a       gadget-dc8tf                                                 gadget
	0073925a1c9e9       gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac                              23 seconds ago           Running             registry-proxy                           0                   f823c21b6775a       registry-proxy-npjfm                                         kube-system
	e537a2e8b2bf9       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     25 seconds ago           Running             amd-gpu-device-plugin                    0                   0d06c385cdd64       amd-gpu-device-plugin-xz9v9                                  kube-system
	38477b1d03e50       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   26 seconds ago           Running             csi-external-health-monitor-controller   0                   7a586eac8e09c       csi-hostpathplugin-pqz6c                                     kube-system
	1a9d22dc404fa       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:e2d8d9e1553c1ac5f9f41bc34d38d1eda519ed77a3106b036c43b6667dad19a9                   26 seconds ago           Exited              patch                                    0                   605932888e8af       ingress-nginx-admission-patch-v7crq                          ingress-nginx
	4f25c64762e22       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             27 seconds ago           Running             csi-attacher                             0                   9eae4bc0be16f       csi-hostpath-attacher-0                                      kube-system
	378d489b0bf34       nvcr.io/nvidia/k8s-device-plugin@sha256:c3c1a099015d1810c249ba294beaad656ce0354f7e8a77803dacabe60a4f8c9f                                     27 seconds ago           Running             nvidia-device-plugin-ctr                 0                   e5b5d6ed63c6f       nvidia-device-plugin-daemonset-zmrbr                         kube-system
	04ad42a5d6445       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:e2d8d9e1553c1ac5f9f41bc34d38d1eda519ed77a3106b036c43b6667dad19a9                   30 seconds ago           Exited              patch                                    0                   fc6e6b6e961d7       gcp-auth-certs-patch-wt6fd                                   gcp-auth
	f792cdae76b0d       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:e2d8d9e1553c1ac5f9f41bc34d38d1eda519ed77a3106b036c43b6667dad19a9                   31 seconds ago           Exited              create                                   0                   ced0d9783dbd1       gcp-auth-certs-create-khg8z                                  gcp-auth
	d149b51046769       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             31 seconds ago           Running             local-path-provisioner                   0                   a25c8c1fc57ef       local-path-provisioner-c44bcd496-fs6xs                       local-path-storage
	07590d54bd878       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:e2d8d9e1553c1ac5f9f41bc34d38d1eda519ed77a3106b036c43b6667dad19a9                   31 seconds ago           Exited              create                                   0                   6cee0dbd7b500       ingress-nginx-admission-create-65fww                         ingress-nginx
	3ff9219f13f87       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      32 seconds ago           Running             volume-snapshot-controller               0                   aa6f5012742d6       snapshot-controller-6588d87457-vrlp4                         kube-system
	c8cfde75062d1       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      32 seconds ago           Running             volume-snapshot-controller               0                   08a4e5ff97314       snapshot-controller-6588d87457-nkfdp                         kube-system
	412a3cc9d76f6       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              33 seconds ago           Running             csi-resizer                              0                   1c86337d69a28       csi-hostpath-resizer-0                                       kube-system
	f453e17ecff34       docker.io/library/registry@sha256:f57ffd2bb01704b6082396158e77ca6d1112bc6fe32315c322864de804750d8a                                           34 seconds ago           Running             registry                                 0                   ecce9a36890aa       registry-788cd7d5bc-kpth5                                    kube-system
	8e16bb8b1dc9f       gcr.io/cloud-spanner-emulator/emulator@sha256:b948b04b45496ebeb13eee27bc9d238593c142e8e010443892153f181591abde                               35 seconds ago           Running             cloud-spanner-emulator                   0                   cbc6a726327a3       cloud-spanner-emulator-5649ccbc87-4k2p2                      default
	065656a56ac08       ghcr.io/manusa/yakd@sha256:45d2fe163841511e351ae36a5e434fb854a886b0d6a70cea692bd707543fd8c6                                                  38 seconds ago           Running             yakd                                     0                   ef454064adfd8       yakd-dashboard-7bcf5795cd-twqvk                              yakd-dashboard
	c30a8b92335dd       registry.k8s.io/metrics-server/metrics-server@sha256:5dd31abb8093690d9624a53277a00d2257e7e57e6766be3f9f54cf9f54cddbc1                        40 seconds ago           Running             metrics-server                           0                   b4d133ae57aa5       metrics-server-5778bb4788-v4bk9                              kube-system
	d5450d34ec89c       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7                               41 seconds ago           Running             minikube-ingress-dns                     0                   f7891e6d81403       kube-ingress-dns-minikube                                    kube-system
	bb9ae55f945e1       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                                                             46 seconds ago           Running             coredns                                  0                   f22dc0b1d1638       coredns-7d764666f9-c5mz2                                     kube-system
	786a8dbc1c197       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             46 seconds ago           Running             storage-provisioner                      0                   29e1e3705bb05       storage-provisioner                                          kube-system
	47e635841991a       docker.io/kindest/kindnetd@sha256:7c22558dc06a570d46ea6e8a73b23cdc754eb81f7c08d3441a3171ad359ffc27                                           57 seconds ago           Running             kindnet-cni                              0                   5b787fe292d20       kindnet-8rpxw                                                kube-system
	3fee420a7842a       32652ff1bbe6b6df16a3dc621fcad4e6e185a2852bffc5847aab79e36c54bfb8                                                                             59 seconds ago           Running             kube-proxy                               0                   fd3c8c847f630       kube-proxy-nx2qs                                             kube-system
	f61be6c7d6abd       5c6acd67e9cd10eb60a246cd233db251ef62ea97e6572f897e873f0cb648f499                                                                             About a minute ago       Running             kube-apiserver                           0                   5d3825421de4e       kube-apiserver-addons-171536                                 kube-system
	bb18076e67f3c       2c9a4b058bd7e6ec479c38f9e8a1dac2f5ee5b0a3ebda6dfac968e8720229508                                                                             About a minute ago       Running             kube-controller-manager                  0                   0b42f6737d1db       kube-controller-manager-addons-171536                        kube-system
	3ffa2d6de25c6       550794e3b12ac21ec7fd940bdfb45f7c8dae3c52acd8c70e580b882de20c3dcc                                                                             About a minute ago       Running             kube-scheduler                           0                   1a90154f40c83       kube-scheduler-addons-171536                                 kube-system
	883f9697f51fc       0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2                                                                             About a minute ago       Running             etcd                                     0                   aae48e1156378       etcd-addons-171536                                           kube-system
	
	
	==> coredns [bb9ae55f945e1d917ea4576711c3aa2aa501f59b2c2ba14af5414eb1a9b01ecc] <==
	[INFO] 10.244.0.14:56818 - 15042 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00007539s
	[INFO] 10.244.0.14:49213 - 44743 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000098907s
	[INFO] 10.244.0.14:49213 - 44452 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000120225s
	[INFO] 10.244.0.14:50970 - 14641 "AAAA IN registry.kube-system.svc.cluster.local.us-east4-a.c.k8s-minikube.internal. udp 91 false 512" NXDOMAIN qr,aa,rd,ra 198 0.000058723s
	[INFO] 10.244.0.14:50970 - 14408 "A IN registry.kube-system.svc.cluster.local.us-east4-a.c.k8s-minikube.internal. udp 91 false 512" NXDOMAIN qr,aa,rd,ra 198 0.000088793s
	[INFO] 10.244.0.14:34480 - 3021 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000077982s
	[INFO] 10.244.0.14:34480 - 2768 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.00012269s
	[INFO] 10.244.0.14:46829 - 13280 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000081381s
	[INFO] 10.244.0.14:46829 - 13535 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000114121s
	[INFO] 10.244.0.14:38206 - 23147 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000132389s
	[INFO] 10.244.0.14:38206 - 22951 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000173621s
	[INFO] 10.244.0.21:38461 - 17584 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000195729s
	[INFO] 10.244.0.21:49565 - 12827 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000264511s
	[INFO] 10.244.0.21:40389 - 49056 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000127903s
	[INFO] 10.244.0.21:51460 - 59354 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000162261s
	[INFO] 10.244.0.21:43730 - 51260 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000121437s
	[INFO] 10.244.0.21:58410 - 55417 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000137496s
	[INFO] 10.244.0.21:42589 - 36751 "A IN storage.googleapis.com.us-east4-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 182 0.005252617s
	[INFO] 10.244.0.21:58079 - 14078 "AAAA IN storage.googleapis.com.us-east4-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 182 0.007194316s
	[INFO] 10.244.0.21:40442 - 39498 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.005145572s
	[INFO] 10.244.0.21:41194 - 65341 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.006040098s
	[INFO] 10.244.0.21:57611 - 31040 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.004781002s
	[INFO] 10.244.0.21:50985 - 3856 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.005183512s
	[INFO] 10.244.0.21:35318 - 6787 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 534 0.001220507s
	[INFO] 10.244.0.21:54862 - 16705 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001337264s
	
	
	==> describe nodes <==
	Name:               addons-171536
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-171536
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4473aa4ffaa416872fe849e19c0ce3dabca02c04
	                    minikube.k8s.io/name=addons-171536
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2026_01_11T07_24_05_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-171536
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-171536"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 11 Jan 2026 07:24:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-171536
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 11 Jan 2026 07:25:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 11 Jan 2026 07:25:06 +0000   Sun, 11 Jan 2026 07:24:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 11 Jan 2026 07:25:06 +0000   Sun, 11 Jan 2026 07:24:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 11 Jan 2026 07:25:06 +0000   Sun, 11 Jan 2026 07:24:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 11 Jan 2026 07:25:06 +0000   Sun, 11 Jan 2026 07:24:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-171536
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 19a608e996fda0d6a8d0aefd69620b69
	  System UUID:                422a8aa5-3a89-4db9-be2a-f71c89313873
	  Boot ID:                    bf5ef4fd-1e5e-4242-b522-48b308ed11f1
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.35.0
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (28 in total)
	  Namespace                   Name                                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  default                     cloud-spanner-emulator-5649ccbc87-4k2p2                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         59s
	  gadget                      gadget-dc8tf                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         58s
	  gcp-auth                    gcp-auth-5bbcf684b5-kzglr                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	  ingress-nginx               ingress-nginx-controller-7847b5c79c-slxwd                     100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         58s
	  kube-system                 amd-gpu-device-plugin-xz9v9                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	  kube-system                 coredns-7d764666f9-c5mz2                                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     60s
	  kube-system                 csi-hostpath-attacher-0                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         58s
	  kube-system                 csi-hostpath-resizer-0                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         58s
	  kube-system                 csi-hostpathplugin-pqz6c                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	  kube-system                 etcd-addons-171536                                            100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         65s
	  kube-system                 kindnet-8rpxw                                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      60s
	  kube-system                 kube-apiserver-addons-171536                                  250m (3%)     0 (0%)      0 (0%)           0 (0%)         65s
	  kube-system                 kube-controller-manager-addons-171536                         200m (2%)     0 (0%)      0 (0%)           0 (0%)         65s
	  kube-system                 kube-ingress-dns-minikube                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         59s
	  kube-system                 kube-proxy-nx2qs                                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 kube-scheduler-addons-171536                                  100m (1%)     0 (0%)      0 (0%)           0 (0%)         66s
	  kube-system                 metrics-server-5778bb4788-v4bk9                               100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         58s
	  kube-system                 nvidia-device-plugin-daemonset-zmrbr                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	  kube-system                 registry-788cd7d5bc-kpth5                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         59s
	  kube-system                 registry-creds-567fb78d95-zq5dk                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         59s
	  kube-system                 registry-proxy-npjfm                                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	  kube-system                 snapshot-controller-6588d87457-nkfdp                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         58s
	  kube-system                 snapshot-controller-6588d87457-vrlp4                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         58s
	  kube-system                 storage-provisioner                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         59s
	  local-path-storage          helper-pod-create-pvc-8c039fff-4506-4bf1-8e4f-d24f60e4bc19    0 (0%)        0 (0%)      0 (0%)           0 (0%)         1s
	  local-path-storage          local-path-provisioner-c44bcd496-fs6xs                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         58s
	  yakd-dashboard              yakd-dashboard-7bcf5795cd-twqvk                               0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     58s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             638Mi (1%)   476Mi (1%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  61s   node-controller  Node addons-171536 event: Registered Node addons-171536 in Controller
	
	
	==> dmesg <==
	[Jan11 07:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001873] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.085009] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.377362] i8042: Warning: Keylock active
	[  +0.013119] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.487024] block sda: the capability attribute has been deprecated.
	[  +0.081163] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.022550] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +4.654171] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [883f9697f51fca90435f0ac5afb738b9e4e9e090e04188a7094a910cc6458898] <==
	{"level":"info","ts":"2026-01-11T07:24:01.667669Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2026-01-11T07:24:01.667679Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2026-01-11T07:24:01.668357Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-171536 ClientURLs:[https://192.168.49.2:2379]}","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2026-01-11T07:24:01.668360Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-11T07:24:01.668402Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-11T07:24:01.668445Z","caller":"etcdserver/server.go:2420","msg":"setting up initial cluster version using v3 API","cluster-version":"3.6"}
	{"level":"info","ts":"2026-01-11T07:24:01.668686Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2026-01-11T07:24:01.668712Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2026-01-11T07:24:01.669054Z","caller":"membership/cluster.go:682","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.6"}
	{"level":"info","ts":"2026-01-11T07:24:01.669149Z","caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.6"}
	{"level":"info","ts":"2026-01-11T07:24:01.669192Z","caller":"etcdserver/server.go:2440","msg":"cluster version is updated","cluster-version":"3.6"}
	{"level":"info","ts":"2026-01-11T07:24:01.669254Z","caller":"version/monitor.go:116","msg":"cluster version differs from storage version.","cluster-version":"3.6.0","storage-version":"3.5.0"}
	{"level":"info","ts":"2026-01-11T07:24:01.669417Z","caller":"schema/migration.go:65","msg":"updated storage version","new-storage-version":"3.6.0"}
	{"level":"info","ts":"2026-01-11T07:24:01.669587Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-11T07:24:01.669671Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-11T07:24:01.671747Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2026-01-11T07:24:01.671908Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2026-01-11T07:24:46.311779Z","caller":"traceutil/trace.go:172","msg":"trace[1913992896] transaction","detail":"{read_only:false; response_revision:1133; number_of_response:1; }","duration":"117.145905ms","start":"2026-01-11T07:24:46.194608Z","end":"2026-01-11T07:24:46.311754Z","steps":["trace[1913992896] 'process raft request'  (duration: 117.036324ms)"],"step_count":1}
	{"level":"info","ts":"2026-01-11T07:24:46.410267Z","caller":"traceutil/trace.go:172","msg":"trace[1075905228] transaction","detail":"{read_only:false; response_revision:1134; number_of_response:1; }","duration":"184.242308ms","start":"2026-01-11T07:24:46.226008Z","end":"2026-01-11T07:24:46.410251Z","steps":["trace[1075905228] 'process raft request'  (duration: 183.759456ms)"],"step_count":1}
	{"level":"warn","ts":"2026-01-11T07:24:46.569747Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"154.738671ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2026-01-11T07:24:46.569781Z","caller":"traceutil/trace.go:172","msg":"trace[2114375992] transaction","detail":"{read_only:false; response_revision:1136; number_of_response:1; }","duration":"154.388509ms","start":"2026-01-11T07:24:46.415373Z","end":"2026-01-11T07:24:46.569762Z","steps":["trace[2114375992] 'process raft request'  (duration: 98.861758ms)","trace[2114375992] 'compare'  (duration: 55.372281ms)"],"step_count":2}
	{"level":"info","ts":"2026-01-11T07:24:46.569830Z","caller":"traceutil/trace.go:172","msg":"trace[2029736371] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1135; }","duration":"154.856395ms","start":"2026-01-11T07:24:46.414935Z","end":"2026-01-11T07:24:46.569791Z","steps":["trace[2029736371] 'agreement among raft nodes before linearized reading'  (duration: 99.253629ms)","trace[2029736371] 'range keys from in-memory index tree'  (duration: 55.456683ms)"],"step_count":2}
	{"level":"warn","ts":"2026-01-11T07:24:46.569734Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"155.452715ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2026-01-11T07:24:46.569936Z","caller":"traceutil/trace.go:172","msg":"trace[1411082611] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1135; }","duration":"155.697653ms","start":"2026-01-11T07:24:46.414220Z","end":"2026-01-11T07:24:46.569918Z","steps":["trace[1411082611] 'agreement among raft nodes before linearized reading'  (duration: 99.982766ms)","trace[1411082611] 'range keys from in-memory index tree'  (duration: 55.453827ms)"],"step_count":2}
	{"level":"info","ts":"2026-01-11T07:25:06.556006Z","caller":"traceutil/trace.go:172","msg":"trace[1067140397] transaction","detail":"{read_only:false; response_revision:1249; number_of_response:1; }","duration":"117.917178ms","start":"2026-01-11T07:25:06.438076Z","end":"2026-01-11T07:25:06.555993Z","steps":["trace[1067140397] 'process raft request'  (duration: 117.790083ms)"],"step_count":1}
	
	
	==> gcp-auth [6dc54bbabe509c77625111fe4b1dd48000c790659118d4958fa61d0d0d9dd55e] <==
	2026/01/11 07:24:55 GCP Auth Webhook started!
	2026/01/11 07:25:01 Ready to marshal response ...
	2026/01/11 07:25:01 Ready to write response ...
	2026/01/11 07:25:01 Ready to marshal response ...
	2026/01/11 07:25:01 Ready to write response ...
	2026/01/11 07:25:01 Ready to marshal response ...
	2026/01/11 07:25:01 Ready to write response ...
	2026/01/11 07:25:09 Ready to marshal response ...
	2026/01/11 07:25:09 Ready to write response ...
	2026/01/11 07:25:09 Ready to marshal response ...
	2026/01/11 07:25:09 Ready to write response ...
	
	
	==> kernel <==
	 07:25:10 up 7 min,  0 user,  load average: 1.30, 0.56, 0.21
	Linux addons-171536 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [47e635841991a532d5fdca1c3fb0fca6b3bafa3cfd322360c14ae5982bad6e65] <==
	I0111 07:24:13.099519       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0111 07:24:13.099869       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I0111 07:24:13.100043       1 main.go:148] setting mtu 1500 for CNI 
	I0111 07:24:13.100069       1 main.go:178] kindnetd IP family: "ipv4"
	I0111 07:24:13.100101       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2026-01-11T07:24:13Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0111 07:24:13.398913       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0111 07:24:13.398960       1 controller.go:381] "Waiting for informer caches to sync"
	I0111 07:24:13.399047       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0111 07:24:13.399328       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I0111 07:24:13.699740       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0111 07:24:13.699764       1 metrics.go:72] Registering metrics
	I0111 07:24:13.699840       1 controller.go:711] "Syncing nftables rules"
	I0111 07:24:23.301930       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0111 07:24:23.302028       1 main.go:301] handling current node
	I0111 07:24:33.302686       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0111 07:24:33.302738       1 main.go:301] handling current node
	I0111 07:24:43.302142       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0111 07:24:43.302185       1 main.go:301] handling current node
	I0111 07:24:53.302963       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0111 07:24:53.303013       1 main.go:301] handling current node
	I0111 07:25:03.302961       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0111 07:25:03.303001       1 main.go:301] handling current node
	
	
	==> kube-apiserver [f61be6c7d6abd9153c648bad6294ad71748a4f698c482a43c7b341d16d84f47a] <==
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0111 07:24:31.181296       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.97.30.133:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.97.30.133:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.97.30.133:443: connect: connection refused" logger="UnhandledError"
	W0111 07:24:32.182107       1 handler_proxy.go:99] no RequestInfo found in the context
	W0111 07:24:32.182133       1 handler_proxy.go:99] no RequestInfo found in the context
	E0111 07:24:32.182148       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0111 07:24:32.182170       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0111 07:24:32.182190       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0111 07:24:32.183310       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0111 07:24:36.187746       1 handler_proxy.go:99] no RequestInfo found in the context
	E0111 07:24:36.187816       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0111 07:24:36.187918       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.97.30.133:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.97.30.133:443/apis/metrics.k8s.io/v1beta1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" logger="UnhandledError"
	I0111 07:24:36.196364       1 handler.go:304] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0111 07:24:37.799358       1 logging.go:55] [core] [Channel #267 SubChannel #268]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W0111 07:24:37.805741       1 logging.go:55] [core] [Channel #271 SubChannel #272]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W0111 07:24:37.819409       1 logging.go:55] [core] [Channel #275 SubChannel #276]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W0111 07:24:37.826741       1 logging.go:55] [core] [Channel #279 SubChannel #280]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	E0111 07:25:08.746809       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:56688: use of closed network connection
	E0111 07:25:08.885500       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:56716: use of closed network connection
	
	
	==> kube-controller-manager [bb18076e67f3c6ae629aa4724afeaf0fc653355feab08f2a4b65110d19e4030b] <==
	I0111 07:24:09.318641       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="addons-171536"
	I0111 07:24:09.318689       1 node_lifecycle_controller.go:1038] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I0111 07:24:09.318770       1 shared_informer.go:377] "Caches are synced"
	I0111 07:24:09.318792       1 shared_informer.go:377] "Caches are synced"
	I0111 07:24:09.318920       1 shared_informer.go:377] "Caches are synced"
	I0111 07:24:09.318940       1 shared_informer.go:377] "Caches are synced"
	I0111 07:24:09.318988       1 range_allocator.go:177] "Sending events to api server"
	I0111 07:24:09.319036       1 shared_informer.go:377] "Caches are synced"
	I0111 07:24:09.319043       1 range_allocator.go:181] "Starting range CIDR allocator"
	I0111 07:24:09.319153       1 shared_informer.go:370] "Waiting for caches to sync"
	I0111 07:24:09.319159       1 shared_informer.go:377] "Caches are synced"
	I0111 07:24:09.324198       1 shared_informer.go:370] "Waiting for caches to sync"
	I0111 07:24:09.329128       1 range_allocator.go:433] "Set node PodCIDR" node="addons-171536" podCIDRs=["10.244.0.0/24"]
	I0111 07:24:09.330339       1 shared_informer.go:377] "Caches are synced"
	I0111 07:24:09.419012       1 shared_informer.go:377] "Caches are synced"
	I0111 07:24:09.419028       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I0111 07:24:09.419032       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I0111 07:24:09.425187       1 shared_informer.go:377] "Caches are synced"
	E0111 07:24:12.051004       1 replica_set.go:592] "Unhandled Error" err="sync \"kube-system/metrics-server-5778bb4788\" failed with pods \"metrics-server-5778bb4788-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	I0111 07:24:24.321608       1 node_lifecycle_controller.go:1057] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I0111 07:24:39.334520       1 resource_quota_monitor.go:228] "QuotaMonitor created object count evaluator" resource="volumesnapshots.snapshot.storage.k8s.io"
	I0111 07:24:39.334657       1 shared_informer.go:370] "Waiting for caches to sync"
	I0111 07:24:39.432479       1 shared_informer.go:370] "Waiting for caches to sync"
	I0111 07:24:39.434752       1 shared_informer.go:377] "Caches are synced"
	I0111 07:24:39.532917       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [3fee420a7842ae5fb9d53e5ad7e801e60256ab3660cc9f2ebfdd5fb9619d8ead] <==
	I0111 07:24:11.032597       1 server_linux.go:53] "Using iptables proxy"
	I0111 07:24:11.302474       1 shared_informer.go:370] "Waiting for caches to sync"
	I0111 07:24:11.441837       1 shared_informer.go:377] "Caches are synced"
	I0111 07:24:11.441879       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0111 07:24:11.441977       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0111 07:24:11.998031       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0111 07:24:11.998237       1 server_linux.go:136] "Using iptables Proxier"
	I0111 07:24:12.050604       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0111 07:24:12.060164       1 server.go:529] "Version info" version="v1.35.0"
	I0111 07:24:12.060205       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0111 07:24:12.064819       1 config.go:200] "Starting service config controller"
	I0111 07:24:12.065388       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0111 07:24:12.065459       1 config.go:106] "Starting endpoint slice config controller"
	I0111 07:24:12.065485       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0111 07:24:12.065520       1 config.go:403] "Starting serviceCIDR config controller"
	I0111 07:24:12.065543       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0111 07:24:12.066446       1 config.go:309] "Starting node config controller"
	I0111 07:24:12.066501       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0111 07:24:12.066530       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0111 07:24:12.169661       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0111 07:24:12.169725       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0111 07:24:12.169962       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [3ffa2d6de25c6ece3569c7eff61e3fd3c0d746f7c3c515043a81e0f3e19d49cc] <==
	E0111 07:24:02.552465       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E0111 07:24:02.552596       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E0111 07:24:02.552760       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E0111 07:24:02.552845       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E0111 07:24:02.553124       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E0111 07:24:02.553474       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E0111 07:24:02.553517       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E0111 07:24:02.553562       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E0111 07:24:02.553596       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E0111 07:24:02.553595       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E0111 07:24:02.553628       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E0111 07:24:02.553648       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E0111 07:24:02.553705       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E0111 07:24:02.553758       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E0111 07:24:02.554335       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E0111 07:24:02.554338       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E0111 07:24:03.376966       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E0111 07:24:03.385607       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E0111 07:24:03.396140       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E0111 07:24:03.484539       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E0111 07:24:03.553569       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E0111 07:24:03.634020       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E0111 07:24:03.648954       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1693" type="*v1.ConfigMap"
	E0111 07:24:03.717048       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	I0111 07:24:06.847020       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Jan 11 07:24:50 addons-171536 kubelet[1266]: E0111 07:24:50.208236    1266 prober_manager.go:197] "Startup probe already exists for container" pod="gadget/gadget-dc8tf" containerName="gadget"
	Jan 11 07:24:50 addons-171536 kubelet[1266]: I0111 07:24:50.227112    1266 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="gadget/gadget-dc8tf" podStartSLOduration=17.141885608 podStartE2EDuration="38.227093312s" podCreationTimestamp="2026-01-11 07:24:12 +0000 UTC" firstStartedPulling="2026-01-11 07:24:28.36195513 +0000 UTC m=+23.441541958" lastFinishedPulling="2026-01-11 07:24:49.447162836 +0000 UTC m=+44.526749662" observedRunningTime="2026-01-11 07:24:50.225173942 +0000 UTC m=+45.304760780" watchObservedRunningTime="2026-01-11 07:24:50.227093312 +0000 UTC m=+45.306680147"
	Jan 11 07:24:51 addons-171536 kubelet[1266]: E0111 07:24:51.210773    1266 prober_manager.go:197] "Startup probe already exists for container" pod="gadget/gadget-dc8tf" containerName="gadget"
	Jan 11 07:24:53 addons-171536 kubelet[1266]: E0111 07:24:53.201543    1266 prober_manager.go:197] "Startup probe already exists for container" pod="gadget/gadget-dc8tf" containerName="gadget"
	Jan 11 07:24:53 addons-171536 kubelet[1266]: E0111 07:24:53.218431    1266 prober_manager.go:209] "Readiness probe already exists for container" pod="ingress-nginx/ingress-nginx-controller-7847b5c79c-slxwd" containerName="controller"
	Jan 11 07:24:53 addons-171536 kubelet[1266]: I0111 07:24:53.232145    1266 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="ingress-nginx/ingress-nginx-controller-7847b5c79c-slxwd" podStartSLOduration=27.999736138 podStartE2EDuration="41.232127903s" podCreationTimestamp="2026-01-11 07:24:12 +0000 UTC" firstStartedPulling="2026-01-11 07:24:39.675427693 +0000 UTC m=+34.755014511" lastFinishedPulling="2026-01-11 07:24:52.907819448 +0000 UTC m=+47.987406276" observedRunningTime="2026-01-11 07:24:53.230924069 +0000 UTC m=+48.310510904" watchObservedRunningTime="2026-01-11 07:24:53.232127903 +0000 UTC m=+48.311714739"
	Jan 11 07:24:53 addons-171536 kubelet[1266]: E0111 07:24:53.276992    1266 prober_manager.go:197] "Startup probe already exists for container" pod="gadget/gadget-dc8tf" containerName="gadget"
	Jan 11 07:24:54 addons-171536 kubelet[1266]: E0111 07:24:54.221361    1266 prober_manager.go:197] "Startup probe already exists for container" pod="gadget/gadget-dc8tf" containerName="gadget"
	Jan 11 07:24:54 addons-171536 kubelet[1266]: E0111 07:24:54.221449    1266 prober_manager.go:209] "Readiness probe already exists for container" pod="ingress-nginx/ingress-nginx-controller-7847b5c79c-slxwd" containerName="controller"
	Jan 11 07:24:55 addons-171536 kubelet[1266]: I0111 07:24:55.237738    1266 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="gcp-auth/gcp-auth-5bbcf684b5-kzglr" podStartSLOduration=21.810497587 podStartE2EDuration="37.237720916s" podCreationTimestamp="2026-01-11 07:24:18 +0000 UTC" firstStartedPulling="2026-01-11 07:24:39.710835333 +0000 UTC m=+34.790422148" lastFinishedPulling="2026-01-11 07:24:55.138058659 +0000 UTC m=+50.217645477" observedRunningTime="2026-01-11 07:24:55.236644609 +0000 UTC m=+50.316231444" watchObservedRunningTime="2026-01-11 07:24:55.237720916 +0000 UTC m=+50.317307751"
	Jan 11 07:24:55 addons-171536 kubelet[1266]: E0111 07:24:55.599792    1266 secret.go:189] Couldn't get secret kube-system/registry-creds-gcr: secret "registry-creds-gcr" not found
	Jan 11 07:24:55 addons-171536 kubelet[1266]: E0111 07:24:55.599898    1266 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/08286141-05a5-426c-8156-61dfe02f5902-gcr-creds podName:08286141-05a5-426c-8156-61dfe02f5902 nodeName:}" failed. No retries permitted until 2026-01-11 07:25:27.599879476 +0000 UTC m=+82.679466302 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "gcr-creds" (UniqueName: "kubernetes.io/secret/08286141-05a5-426c-8156-61dfe02f5902-gcr-creds") pod "registry-creds-567fb78d95-zq5dk" (UID: "08286141-05a5-426c-8156-61dfe02f5902") : secret "registry-creds-gcr" not found
	Jan 11 07:24:57 addons-171536 kubelet[1266]: I0111 07:24:57.038195    1266 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: hostpath.csi.k8s.io endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
	Jan 11 07:24:57 addons-171536 kubelet[1266]: I0111 07:24:57.038236    1266 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: hostpath.csi.k8s.io at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
	Jan 11 07:24:59 addons-171536 kubelet[1266]: E0111 07:24:59.252916    1266 prober_manager.go:221] "Liveness probe already exists for container" pod="kube-system/csi-hostpathplugin-pqz6c" containerName="hostpath"
	Jan 11 07:24:59 addons-171536 kubelet[1266]: I0111 07:24:59.266301    1266 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/csi-hostpathplugin-pqz6c" podStartSLOduration=1.803401111 podStartE2EDuration="36.266283631s" podCreationTimestamp="2026-01-11 07:24:23 +0000 UTC" firstStartedPulling="2026-01-11 07:24:24.211696756 +0000 UTC m=+19.291283583" lastFinishedPulling="2026-01-11 07:24:58.674579276 +0000 UTC m=+53.754166103" observedRunningTime="2026-01-11 07:24:59.264883158 +0000 UTC m=+54.344470015" watchObservedRunningTime="2026-01-11 07:24:59.266283631 +0000 UTC m=+54.345870466"
	Jan 11 07:25:00 addons-171536 kubelet[1266]: E0111 07:25:00.256048    1266 prober_manager.go:221] "Liveness probe already exists for container" pod="kube-system/csi-hostpathplugin-pqz6c" containerName="hostpath"
	Jan 11 07:25:01 addons-171536 kubelet[1266]: I0111 07:25:01.744485    1266 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7rcf\" (UniqueName: \"kubernetes.io/projected/d1a88d8f-0c64-49a1-b3dc-8959539b0a56-kube-api-access-n7rcf\") pod \"busybox\" (UID: \"d1a88d8f-0c64-49a1-b3dc-8959539b0a56\") " pod="default/busybox"
	Jan 11 07:25:01 addons-171536 kubelet[1266]: I0111 07:25:01.744620    1266 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/d1a88d8f-0c64-49a1-b3dc-8959539b0a56-gcp-creds\") pod \"busybox\" (UID: \"d1a88d8f-0c64-49a1-b3dc-8959539b0a56\") " pod="default/busybox"
	Jan 11 07:25:03 addons-171536 kubelet[1266]: I0111 07:25:03.281149    1266 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.114748181 podStartE2EDuration="2.281116365s" podCreationTimestamp="2026-01-11 07:25:01 +0000 UTC" firstStartedPulling="2026-01-11 07:25:01.945833974 +0000 UTC m=+57.025420806" lastFinishedPulling="2026-01-11 07:25:03.112202155 +0000 UTC m=+58.191788990" observedRunningTime="2026-01-11 07:25:03.280556459 +0000 UTC m=+58.360143294" watchObservedRunningTime="2026-01-11 07:25:03.281116365 +0000 UTC m=+58.360703201"
	Jan 11 07:25:04 addons-171536 kubelet[1266]: E0111 07:25:04.224284    1266 prober_manager.go:209] "Readiness probe already exists for container" pod="ingress-nginx/ingress-nginx-controller-7847b5c79c-slxwd" containerName="controller"
	Jan 11 07:25:09 addons-171536 kubelet[1266]: I0111 07:25:09.504646    1266 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/7c227e99-05c5-4823-9c4f-ff6546926237-data\") pod \"helper-pod-create-pvc-8c039fff-4506-4bf1-8e4f-d24f60e4bc19\" (UID: \"7c227e99-05c5-4823-9c4f-ff6546926237\") " pod="local-path-storage/helper-pod-create-pvc-8c039fff-4506-4bf1-8e4f-d24f60e4bc19"
	Jan 11 07:25:09 addons-171536 kubelet[1266]: I0111 07:25:09.504725    1266 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qts6f\" (UniqueName: \"kubernetes.io/projected/7c227e99-05c5-4823-9c4f-ff6546926237-kube-api-access-qts6f\") pod \"helper-pod-create-pvc-8c039fff-4506-4bf1-8e4f-d24f60e4bc19\" (UID: \"7c227e99-05c5-4823-9c4f-ff6546926237\") " pod="local-path-storage/helper-pod-create-pvc-8c039fff-4506-4bf1-8e4f-d24f60e4bc19"
	Jan 11 07:25:09 addons-171536 kubelet[1266]: I0111 07:25:09.504909    1266 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/7c227e99-05c5-4823-9c4f-ff6546926237-gcp-creds\") pod \"helper-pod-create-pvc-8c039fff-4506-4bf1-8e4f-d24f60e4bc19\" (UID: \"7c227e99-05c5-4823-9c4f-ff6546926237\") " pod="local-path-storage/helper-pod-create-pvc-8c039fff-4506-4bf1-8e4f-d24f60e4bc19"
	Jan 11 07:25:09 addons-171536 kubelet[1266]: I0111 07:25:09.504970    1266 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/7c227e99-05c5-4823-9c4f-ff6546926237-script\") pod \"helper-pod-create-pvc-8c039fff-4506-4bf1-8e4f-d24f60e4bc19\" (UID: \"7c227e99-05c5-4823-9c4f-ff6546926237\") " pod="local-path-storage/helper-pod-create-pvc-8c039fff-4506-4bf1-8e4f-d24f60e4bc19"
	
	
	==> storage-provisioner [786a8dbc1c197cacefbc38263fc9b22c0f085386e7ba56db80565adc04738c53] <==
	W0111 07:24:46.482609       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 07:24:48.485651       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 07:24:48.491557       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 07:24:50.495452       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 07:24:50.499581       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 07:24:52.502298       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 07:24:52.507932       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 07:24:54.511264       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 07:24:54.515377       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 07:24:56.518374       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 07:24:56.521772       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 07:24:58.525161       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 07:24:58.530658       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 07:25:00.533440       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 07:25:00.538931       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 07:25:02.541679       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 07:25:02.545467       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 07:25:04.548275       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 07:25:04.552782       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 07:25:06.556844       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 07:25:06.565701       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 07:25:08.568444       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 07:25:08.573082       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 07:25:10.576109       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 07:25:10.581457       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-171536 -n addons-171536
helpers_test.go:270: (dbg) Run:  kubectl --context addons-171536 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: test-local-path gcp-auth-certs-create-khg8z gcp-auth-certs-patch-wt6fd ingress-nginx-admission-create-65fww ingress-nginx-admission-patch-v7crq registry-creds-567fb78d95-zq5dk helper-pod-create-pvc-8c039fff-4506-4bf1-8e4f-d24f60e4bc19
helpers_test.go:283: ======> post-mortem[TestAddons/parallel/Headlamp]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context addons-171536 describe pod test-local-path gcp-auth-certs-create-khg8z gcp-auth-certs-patch-wt6fd ingress-nginx-admission-create-65fww ingress-nginx-admission-patch-v7crq registry-creds-567fb78d95-zq5dk helper-pod-create-pvc-8c039fff-4506-4bf1-8e4f-d24f60e4bc19
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context addons-171536 describe pod test-local-path gcp-auth-certs-create-khg8z gcp-auth-certs-patch-wt6fd ingress-nginx-admission-create-65fww ingress-nginx-admission-patch-v7crq registry-creds-567fb78d95-zq5dk helper-pod-create-pvc-8c039fff-4506-4bf1-8e4f-d24f60e4bc19: exit status 1 (70.196979ms)

                                                
                                                
-- stdout --
	Name:               test-local-path
	Namespace:          default
	Priority:           0
	Service Account:    default
	Node:               <none>
	Labels:             run=test-local-path
	Annotations:        <none>
	Status:             Pending
	IP:                 
	IPs:                <none>
	NominatedNodeName:  addons-171536
	Containers:
	  busybox:
	    Image:      busybox:stable
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-x8v97 (ro)
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-x8v97:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:            <none>

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "gcp-auth-certs-create-khg8z" not found
	Error from server (NotFound): pods "gcp-auth-certs-patch-wt6fd" not found
	Error from server (NotFound): pods "ingress-nginx-admission-create-65fww" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-v7crq" not found
	Error from server (NotFound): pods "registry-creds-567fb78d95-zq5dk" not found
	Error from server (NotFound): pods "helper-pod-create-pvc-8c039fff-4506-4bf1-8e4f-d24f60e4bc19" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context addons-171536 describe pod test-local-path gcp-auth-certs-create-khg8z gcp-auth-certs-patch-wt6fd ingress-nginx-admission-create-65fww ingress-nginx-admission-patch-v7crq registry-creds-567fb78d95-zq5dk helper-pod-create-pvc-8c039fff-4506-4bf1-8e4f-d24f60e4bc19: exit status 1
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-171536 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-171536 addons disable headlamp --alsologtostderr -v=1: exit status 11 (242.162741ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0111 07:25:11.540131   18776 out.go:360] Setting OutFile to fd 1 ...
	I0111 07:25:11.540290   18776 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 07:25:11.540301   18776 out.go:374] Setting ErrFile to fd 2...
	I0111 07:25:11.540307   18776 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 07:25:11.540500   18776 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22402-4689/.minikube/bin
	I0111 07:25:11.540782   18776 mustload.go:66] Loading cluster: addons-171536
	I0111 07:25:11.541148   18776 config.go:182] Loaded profile config "addons-171536": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 07:25:11.541165   18776 addons.go:622] checking whether the cluster is paused
	I0111 07:25:11.541265   18776 config.go:182] Loaded profile config "addons-171536": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 07:25:11.541279   18776 host.go:66] Checking if "addons-171536" exists ...
	I0111 07:25:11.541634   18776 cli_runner.go:164] Run: docker container inspect addons-171536 --format={{.State.Status}}
	I0111 07:25:11.559055   18776 ssh_runner.go:195] Run: systemctl --version
	I0111 07:25:11.559125   18776 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-171536
	I0111 07:25:11.577925   18776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22402-4689/.minikube/machines/addons-171536/id_rsa Username:docker}
	I0111 07:25:11.678423   18776 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0111 07:25:11.678505   18776 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0111 07:25:11.707388   18776 cri.go:96] found id: "dec2189f99757ab851395688c7fc4d79ec795d60f0fe442fda316b4651b5f55d"
	I0111 07:25:11.707415   18776 cri.go:96] found id: "020fa17cec32bb1bb91372abed236b039862627b1fd8fd4fde81f48ed468506a"
	I0111 07:25:11.707420   18776 cri.go:96] found id: "4e96c6658214a060ecae16a626e9deca9b428dca524c173c67acc71276d31ed8"
	I0111 07:25:11.707425   18776 cri.go:96] found id: "61261d2742245ff39d3fe14f2e1e7bf0d863678be3fe67b5c66b7add2e5dafe0"
	I0111 07:25:11.707428   18776 cri.go:96] found id: "e82c2d321912890e0c95828d601e534add5486d961339090496e0b7bf8e14e40"
	I0111 07:25:11.707433   18776 cri.go:96] found id: "0073925a1c9e9b81559aaedeae0ea376d65f0b0f86d39d521946a25c0c681ec6"
	I0111 07:25:11.707436   18776 cri.go:96] found id: "e537a2e8b2bf98a5f92bbdc7021887d792dac8e90d307f035a5b086dd94603f4"
	I0111 07:25:11.707441   18776 cri.go:96] found id: "38477b1d03e50987daaf6a39bfb28bf756ed5dc7faec7b69c6731256e2223590"
	I0111 07:25:11.707446   18776 cri.go:96] found id: "4f25c64762e22f492ff4390a2fb5b3c94b2c8b08468692efef8e56d24c717155"
	I0111 07:25:11.707453   18776 cri.go:96] found id: "378d489b0bf341cff405d4cf64f389fc60c711e7d07757a6db4ab7f7192341e6"
	I0111 07:25:11.707457   18776 cri.go:96] found id: "3ff9219f13f87d357a602e307148cc4f3f4ba6e7b581c252ebcf4276c82d4a42"
	I0111 07:25:11.707462   18776 cri.go:96] found id: "c8cfde75062d12d10e4e19ee50819af72fa78518b9ff94a977400346bedc564f"
	I0111 07:25:11.707472   18776 cri.go:96] found id: "412a3cc9d76f6e48dcc863110232a61b800f70b26ea661fa84e8a16c262f410f"
	I0111 07:25:11.707477   18776 cri.go:96] found id: "f453e17ecff346a3d083681ecb4ca0954e14c5abac83bbda4d571c8646eb5500"
	I0111 07:25:11.707485   18776 cri.go:96] found id: "c30a8b92335dd9ed1a8ed66d1318db07964121f3abe89ee920af34af1cf340a5"
	I0111 07:25:11.707513   18776 cri.go:96] found id: "d5450d34ec89c1809b1620960a5552a805b79b716d6ca586749b1118a929e70b"
	I0111 07:25:11.707523   18776 cri.go:96] found id: "bb9ae55f945e1d917ea4576711c3aa2aa501f59b2c2ba14af5414eb1a9b01ecc"
	I0111 07:25:11.707531   18776 cri.go:96] found id: "786a8dbc1c197cacefbc38263fc9b22c0f085386e7ba56db80565adc04738c53"
	I0111 07:25:11.707538   18776 cri.go:96] found id: "47e635841991a532d5fdca1c3fb0fca6b3bafa3cfd322360c14ae5982bad6e65"
	I0111 07:25:11.707543   18776 cri.go:96] found id: "3fee420a7842ae5fb9d53e5ad7e801e60256ab3660cc9f2ebfdd5fb9619d8ead"
	I0111 07:25:11.707557   18776 cri.go:96] found id: "f61be6c7d6abd9153c648bad6294ad71748a4f698c482a43c7b341d16d84f47a"
	I0111 07:25:11.707565   18776 cri.go:96] found id: "bb18076e67f3c6ae629aa4724afeaf0fc653355feab08f2a4b65110d19e4030b"
	I0111 07:25:11.707569   18776 cri.go:96] found id: "3ffa2d6de25c6ece3569c7eff61e3fd3c0d746f7c3c515043a81e0f3e19d49cc"
	I0111 07:25:11.707575   18776 cri.go:96] found id: "883f9697f51fca90435f0ac5afb738b9e4e9e090e04188a7094a910cc6458898"
	I0111 07:25:11.707580   18776 cri.go:96] found id: ""
	I0111 07:25:11.707633   18776 ssh_runner.go:195] Run: sudo runc list -f json
	I0111 07:25:11.721686   18776 out.go:203] 
	W0111 07:25:11.723112   18776 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-11T07:25:11Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-11T07:25:11Z" level=error msg="open /run/runc: no such file or directory"
	
	W0111 07:25:11.723143   18776 out.go:285] * 
	* 
	W0111 07:25:11.723861   18776 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0111 07:25:11.725197   18776 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable headlamp addon: args "out/minikube-linux-amd64 -p addons-171536 addons disable headlamp --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Headlamp (2.60s)
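All of the parallel addon failures in this report exit with the same MK_ADDON_DISABLE_PAUSED signature: before disabling an addon, minikube checks whether the cluster is paused by listing kube-system containers with crictl and then running "sudo runc list -f json" inside the node, and on this crio node that second command fails because /run/runc does not exist. A minimal sketch of reproducing that check by hand, using only commands that already appear in the stderr logs above (the ssh form mirrors the one used for the LocalPath file check below; the crictl call is shown without the "sudo -s eval" wrapper the logs use):

	# List kube-system containers the way the paused check does (this step succeeds in the logs above)
	out/minikube-linux-amd64 -p addons-171536 ssh "sudo crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	# The step that fails on this node: runc's state directory /run/runc is absent,
	# so the command exits with status 1 and "open /run/runc: no such file or directory"
	out/minikube-linux-amd64 -p addons-171536 ssh "sudo runc list -f json"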

                                                
                                    
TestAddons/parallel/CloudSpanner (5.25s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:353: "cloud-spanner-emulator-5649ccbc87-4k2p2" [ac4fbf5a-56d6-4889-bf38-d95d6150009c] Running
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003936076s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-171536 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-171536 addons disable cloud-spanner --alsologtostderr -v=1: exit status 11 (241.091012ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0111 07:25:16.789820   19338 out.go:360] Setting OutFile to fd 1 ...
	I0111 07:25:16.790123   19338 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 07:25:16.790135   19338 out.go:374] Setting ErrFile to fd 2...
	I0111 07:25:16.790139   19338 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 07:25:16.790311   19338 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22402-4689/.minikube/bin
	I0111 07:25:16.790588   19338 mustload.go:66] Loading cluster: addons-171536
	I0111 07:25:16.790946   19338 config.go:182] Loaded profile config "addons-171536": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 07:25:16.790962   19338 addons.go:622] checking whether the cluster is paused
	I0111 07:25:16.791043   19338 config.go:182] Loaded profile config "addons-171536": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 07:25:16.791054   19338 host.go:66] Checking if "addons-171536" exists ...
	I0111 07:25:16.791386   19338 cli_runner.go:164] Run: docker container inspect addons-171536 --format={{.State.Status}}
	I0111 07:25:16.809253   19338 ssh_runner.go:195] Run: systemctl --version
	I0111 07:25:16.809307   19338 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-171536
	I0111 07:25:16.826250   19338 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22402-4689/.minikube/machines/addons-171536/id_rsa Username:docker}
	I0111 07:25:16.927436   19338 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0111 07:25:16.927515   19338 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0111 07:25:16.958094   19338 cri.go:96] found id: "dec2189f99757ab851395688c7fc4d79ec795d60f0fe442fda316b4651b5f55d"
	I0111 07:25:16.958114   19338 cri.go:96] found id: "020fa17cec32bb1bb91372abed236b039862627b1fd8fd4fde81f48ed468506a"
	I0111 07:25:16.958117   19338 cri.go:96] found id: "4e96c6658214a060ecae16a626e9deca9b428dca524c173c67acc71276d31ed8"
	I0111 07:25:16.958123   19338 cri.go:96] found id: "61261d2742245ff39d3fe14f2e1e7bf0d863678be3fe67b5c66b7add2e5dafe0"
	I0111 07:25:16.958126   19338 cri.go:96] found id: "e82c2d321912890e0c95828d601e534add5486d961339090496e0b7bf8e14e40"
	I0111 07:25:16.958129   19338 cri.go:96] found id: "0073925a1c9e9b81559aaedeae0ea376d65f0b0f86d39d521946a25c0c681ec6"
	I0111 07:25:16.958132   19338 cri.go:96] found id: "e537a2e8b2bf98a5f92bbdc7021887d792dac8e90d307f035a5b086dd94603f4"
	I0111 07:25:16.958135   19338 cri.go:96] found id: "38477b1d03e50987daaf6a39bfb28bf756ed5dc7faec7b69c6731256e2223590"
	I0111 07:25:16.958138   19338 cri.go:96] found id: "4f25c64762e22f492ff4390a2fb5b3c94b2c8b08468692efef8e56d24c717155"
	I0111 07:25:16.958144   19338 cri.go:96] found id: "378d489b0bf341cff405d4cf64f389fc60c711e7d07757a6db4ab7f7192341e6"
	I0111 07:25:16.958147   19338 cri.go:96] found id: "3ff9219f13f87d357a602e307148cc4f3f4ba6e7b581c252ebcf4276c82d4a42"
	I0111 07:25:16.958151   19338 cri.go:96] found id: "c8cfde75062d12d10e4e19ee50819af72fa78518b9ff94a977400346bedc564f"
	I0111 07:25:16.958154   19338 cri.go:96] found id: "412a3cc9d76f6e48dcc863110232a61b800f70b26ea661fa84e8a16c262f410f"
	I0111 07:25:16.958157   19338 cri.go:96] found id: "f453e17ecff346a3d083681ecb4ca0954e14c5abac83bbda4d571c8646eb5500"
	I0111 07:25:16.958160   19338 cri.go:96] found id: "c30a8b92335dd9ed1a8ed66d1318db07964121f3abe89ee920af34af1cf340a5"
	I0111 07:25:16.958168   19338 cri.go:96] found id: "d5450d34ec89c1809b1620960a5552a805b79b716d6ca586749b1118a929e70b"
	I0111 07:25:16.958171   19338 cri.go:96] found id: "bb9ae55f945e1d917ea4576711c3aa2aa501f59b2c2ba14af5414eb1a9b01ecc"
	I0111 07:25:16.958175   19338 cri.go:96] found id: "786a8dbc1c197cacefbc38263fc9b22c0f085386e7ba56db80565adc04738c53"
	I0111 07:25:16.958178   19338 cri.go:96] found id: "47e635841991a532d5fdca1c3fb0fca6b3bafa3cfd322360c14ae5982bad6e65"
	I0111 07:25:16.958181   19338 cri.go:96] found id: "3fee420a7842ae5fb9d53e5ad7e801e60256ab3660cc9f2ebfdd5fb9619d8ead"
	I0111 07:25:16.958184   19338 cri.go:96] found id: "f61be6c7d6abd9153c648bad6294ad71748a4f698c482a43c7b341d16d84f47a"
	I0111 07:25:16.958186   19338 cri.go:96] found id: "bb18076e67f3c6ae629aa4724afeaf0fc653355feab08f2a4b65110d19e4030b"
	I0111 07:25:16.958189   19338 cri.go:96] found id: "3ffa2d6de25c6ece3569c7eff61e3fd3c0d746f7c3c515043a81e0f3e19d49cc"
	I0111 07:25:16.958193   19338 cri.go:96] found id: "883f9697f51fca90435f0ac5afb738b9e4e9e090e04188a7094a910cc6458898"
	I0111 07:25:16.958195   19338 cri.go:96] found id: ""
	I0111 07:25:16.958235   19338 ssh_runner.go:195] Run: sudo runc list -f json
	I0111 07:25:16.971735   19338 out.go:203] 
	W0111 07:25:16.972924   19338 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-11T07:25:16Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-11T07:25:16Z" level=error msg="open /run/runc: no such file or directory"
	
	W0111 07:25:16.972944   19338 out.go:285] * 
	* 
	W0111 07:25:16.973603   19338 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0111 07:25:16.974850   19338 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable cloud-spanner addon: args "out/minikube-linux-amd64 -p addons-171536 addons disable cloud-spanner --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CloudSpanner (5.25s)

                                                
                                    
TestAddons/parallel/LocalPath (7.11s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:951: (dbg) Run:  kubectl --context addons-171536 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:957: (dbg) Run:  kubectl --context addons-171536 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:961: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-171536 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-171536 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-171536 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-171536 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-171536 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:353: "test-local-path" [9996b405-1c90-44f0-a393-d657d8138eed] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "test-local-path" [9996b405-1c90-44f0-a393-d657d8138eed] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "test-local-path" [9996b405-1c90-44f0-a393-d657d8138eed] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 2.003020313s
addons_test.go:969: (dbg) Run:  kubectl --context addons-171536 get pvc test-pvc -o=json
addons_test.go:978: (dbg) Run:  out/minikube-linux-amd64 -p addons-171536 ssh "cat /opt/local-path-provisioner/pvc-8c039fff-4506-4bf1-8e4f-d24f60e4bc19_default_test-pvc/file1"
addons_test.go:990: (dbg) Run:  kubectl --context addons-171536 delete pod test-local-path
addons_test.go:994: (dbg) Run:  kubectl --context addons-171536 delete pvc test-pvc
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-171536 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-171536 addons disable storage-provisioner-rancher --alsologtostderr -v=1: exit status 11 (245.497807ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0111 07:25:16.046652   19220 out.go:360] Setting OutFile to fd 1 ...
	I0111 07:25:16.046950   19220 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 07:25:16.046961   19220 out.go:374] Setting ErrFile to fd 2...
	I0111 07:25:16.046966   19220 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 07:25:16.047510   19220 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22402-4689/.minikube/bin
	I0111 07:25:16.047967   19220 mustload.go:66] Loading cluster: addons-171536
	I0111 07:25:16.048739   19220 config.go:182] Loaded profile config "addons-171536": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 07:25:16.048767   19220 addons.go:622] checking whether the cluster is paused
	I0111 07:25:16.048924   19220 config.go:182] Loaded profile config "addons-171536": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 07:25:16.048940   19220 host.go:66] Checking if "addons-171536" exists ...
	I0111 07:25:16.049320   19220 cli_runner.go:164] Run: docker container inspect addons-171536 --format={{.State.Status}}
	I0111 07:25:16.067998   19220 ssh_runner.go:195] Run: systemctl --version
	I0111 07:25:16.068065   19220 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-171536
	I0111 07:25:16.085103   19220 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22402-4689/.minikube/machines/addons-171536/id_rsa Username:docker}
	I0111 07:25:16.185287   19220 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0111 07:25:16.185367   19220 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0111 07:25:16.214929   19220 cri.go:96] found id: "dec2189f99757ab851395688c7fc4d79ec795d60f0fe442fda316b4651b5f55d"
	I0111 07:25:16.214949   19220 cri.go:96] found id: "020fa17cec32bb1bb91372abed236b039862627b1fd8fd4fde81f48ed468506a"
	I0111 07:25:16.214953   19220 cri.go:96] found id: "4e96c6658214a060ecae16a626e9deca9b428dca524c173c67acc71276d31ed8"
	I0111 07:25:16.214956   19220 cri.go:96] found id: "61261d2742245ff39d3fe14f2e1e7bf0d863678be3fe67b5c66b7add2e5dafe0"
	I0111 07:25:16.214959   19220 cri.go:96] found id: "e82c2d321912890e0c95828d601e534add5486d961339090496e0b7bf8e14e40"
	I0111 07:25:16.214962   19220 cri.go:96] found id: "0073925a1c9e9b81559aaedeae0ea376d65f0b0f86d39d521946a25c0c681ec6"
	I0111 07:25:16.214965   19220 cri.go:96] found id: "e537a2e8b2bf98a5f92bbdc7021887d792dac8e90d307f035a5b086dd94603f4"
	I0111 07:25:16.214968   19220 cri.go:96] found id: "38477b1d03e50987daaf6a39bfb28bf756ed5dc7faec7b69c6731256e2223590"
	I0111 07:25:16.214970   19220 cri.go:96] found id: "4f25c64762e22f492ff4390a2fb5b3c94b2c8b08468692efef8e56d24c717155"
	I0111 07:25:16.214976   19220 cri.go:96] found id: "378d489b0bf341cff405d4cf64f389fc60c711e7d07757a6db4ab7f7192341e6"
	I0111 07:25:16.214979   19220 cri.go:96] found id: "3ff9219f13f87d357a602e307148cc4f3f4ba6e7b581c252ebcf4276c82d4a42"
	I0111 07:25:16.214983   19220 cri.go:96] found id: "c8cfde75062d12d10e4e19ee50819af72fa78518b9ff94a977400346bedc564f"
	I0111 07:25:16.214986   19220 cri.go:96] found id: "412a3cc9d76f6e48dcc863110232a61b800f70b26ea661fa84e8a16c262f410f"
	I0111 07:25:16.214989   19220 cri.go:96] found id: "f453e17ecff346a3d083681ecb4ca0954e14c5abac83bbda4d571c8646eb5500"
	I0111 07:25:16.214992   19220 cri.go:96] found id: "c30a8b92335dd9ed1a8ed66d1318db07964121f3abe89ee920af34af1cf340a5"
	I0111 07:25:16.215004   19220 cri.go:96] found id: "d5450d34ec89c1809b1620960a5552a805b79b716d6ca586749b1118a929e70b"
	I0111 07:25:16.215007   19220 cri.go:96] found id: "bb9ae55f945e1d917ea4576711c3aa2aa501f59b2c2ba14af5414eb1a9b01ecc"
	I0111 07:25:16.215011   19220 cri.go:96] found id: "786a8dbc1c197cacefbc38263fc9b22c0f085386e7ba56db80565adc04738c53"
	I0111 07:25:16.215018   19220 cri.go:96] found id: "47e635841991a532d5fdca1c3fb0fca6b3bafa3cfd322360c14ae5982bad6e65"
	I0111 07:25:16.215022   19220 cri.go:96] found id: "3fee420a7842ae5fb9d53e5ad7e801e60256ab3660cc9f2ebfdd5fb9619d8ead"
	I0111 07:25:16.215025   19220 cri.go:96] found id: "f61be6c7d6abd9153c648bad6294ad71748a4f698c482a43c7b341d16d84f47a"
	I0111 07:25:16.215028   19220 cri.go:96] found id: "bb18076e67f3c6ae629aa4724afeaf0fc653355feab08f2a4b65110d19e4030b"
	I0111 07:25:16.215030   19220 cri.go:96] found id: "3ffa2d6de25c6ece3569c7eff61e3fd3c0d746f7c3c515043a81e0f3e19d49cc"
	I0111 07:25:16.215033   19220 cri.go:96] found id: "883f9697f51fca90435f0ac5afb738b9e4e9e090e04188a7094a910cc6458898"
	I0111 07:25:16.215036   19220 cri.go:96] found id: ""
	I0111 07:25:16.215078   19220 ssh_runner.go:195] Run: sudo runc list -f json
	I0111 07:25:16.229856   19220 out.go:203] 
	W0111 07:25:16.231567   19220 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-11T07:25:16Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-11T07:25:16Z" level=error msg="open /run/runc: no such file or directory"
	
	W0111 07:25:16.231589   19220 out.go:285] * 
	* 
	W0111 07:25:16.232306   19220 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0111 07:25:16.233503   19220 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable storage-provisioner-rancher addon: args "out/minikube-linux-amd64 -p addons-171536 addons disable storage-provisioner-rancher --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/LocalPath (7.11s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (5.26s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:353: "nvidia-device-plugin-daemonset-zmrbr" [159b4a1d-9576-4833-83cf-df9e8385d4d7] Running
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.003872565s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-171536 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-171536 addons disable nvidia-device-plugin --alsologtostderr -v=1: exit status 11 (252.41645ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0111 07:25:14.192301   18979 out.go:360] Setting OutFile to fd 1 ...
	I0111 07:25:14.192555   18979 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 07:25:14.192565   18979 out.go:374] Setting ErrFile to fd 2...
	I0111 07:25:14.192568   18979 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 07:25:14.192728   18979 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22402-4689/.minikube/bin
	I0111 07:25:14.193048   18979 mustload.go:66] Loading cluster: addons-171536
	I0111 07:25:14.193379   18979 config.go:182] Loaded profile config "addons-171536": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 07:25:14.193393   18979 addons.go:622] checking whether the cluster is paused
	I0111 07:25:14.193475   18979 config.go:182] Loaded profile config "addons-171536": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 07:25:14.193486   18979 host.go:66] Checking if "addons-171536" exists ...
	I0111 07:25:14.193858   18979 cli_runner.go:164] Run: docker container inspect addons-171536 --format={{.State.Status}}
	I0111 07:25:14.213892   18979 ssh_runner.go:195] Run: systemctl --version
	I0111 07:25:14.213937   18979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-171536
	I0111 07:25:14.233905   18979 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22402-4689/.minikube/machines/addons-171536/id_rsa Username:docker}
	I0111 07:25:14.335403   18979 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0111 07:25:14.335472   18979 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0111 07:25:14.366469   18979 cri.go:96] found id: "dec2189f99757ab851395688c7fc4d79ec795d60f0fe442fda316b4651b5f55d"
	I0111 07:25:14.366492   18979 cri.go:96] found id: "020fa17cec32bb1bb91372abed236b039862627b1fd8fd4fde81f48ed468506a"
	I0111 07:25:14.366497   18979 cri.go:96] found id: "4e96c6658214a060ecae16a626e9deca9b428dca524c173c67acc71276d31ed8"
	I0111 07:25:14.366503   18979 cri.go:96] found id: "61261d2742245ff39d3fe14f2e1e7bf0d863678be3fe67b5c66b7add2e5dafe0"
	I0111 07:25:14.366507   18979 cri.go:96] found id: "e82c2d321912890e0c95828d601e534add5486d961339090496e0b7bf8e14e40"
	I0111 07:25:14.366512   18979 cri.go:96] found id: "0073925a1c9e9b81559aaedeae0ea376d65f0b0f86d39d521946a25c0c681ec6"
	I0111 07:25:14.366516   18979 cri.go:96] found id: "e537a2e8b2bf98a5f92bbdc7021887d792dac8e90d307f035a5b086dd94603f4"
	I0111 07:25:14.366520   18979 cri.go:96] found id: "38477b1d03e50987daaf6a39bfb28bf756ed5dc7faec7b69c6731256e2223590"
	I0111 07:25:14.366524   18979 cri.go:96] found id: "4f25c64762e22f492ff4390a2fb5b3c94b2c8b08468692efef8e56d24c717155"
	I0111 07:25:14.366531   18979 cri.go:96] found id: "378d489b0bf341cff405d4cf64f389fc60c711e7d07757a6db4ab7f7192341e6"
	I0111 07:25:14.366536   18979 cri.go:96] found id: "3ff9219f13f87d357a602e307148cc4f3f4ba6e7b581c252ebcf4276c82d4a42"
	I0111 07:25:14.366540   18979 cri.go:96] found id: "c8cfde75062d12d10e4e19ee50819af72fa78518b9ff94a977400346bedc564f"
	I0111 07:25:14.366544   18979 cri.go:96] found id: "412a3cc9d76f6e48dcc863110232a61b800f70b26ea661fa84e8a16c262f410f"
	I0111 07:25:14.366549   18979 cri.go:96] found id: "f453e17ecff346a3d083681ecb4ca0954e14c5abac83bbda4d571c8646eb5500"
	I0111 07:25:14.366554   18979 cri.go:96] found id: "c30a8b92335dd9ed1a8ed66d1318db07964121f3abe89ee920af34af1cf340a5"
	I0111 07:25:14.366580   18979 cri.go:96] found id: "d5450d34ec89c1809b1620960a5552a805b79b716d6ca586749b1118a929e70b"
	I0111 07:25:14.366588   18979 cri.go:96] found id: "bb9ae55f945e1d917ea4576711c3aa2aa501f59b2c2ba14af5414eb1a9b01ecc"
	I0111 07:25:14.366595   18979 cri.go:96] found id: "786a8dbc1c197cacefbc38263fc9b22c0f085386e7ba56db80565adc04738c53"
	I0111 07:25:14.366601   18979 cri.go:96] found id: "47e635841991a532d5fdca1c3fb0fca6b3bafa3cfd322360c14ae5982bad6e65"
	I0111 07:25:14.366609   18979 cri.go:96] found id: "3fee420a7842ae5fb9d53e5ad7e801e60256ab3660cc9f2ebfdd5fb9619d8ead"
	I0111 07:25:14.366614   18979 cri.go:96] found id: "f61be6c7d6abd9153c648bad6294ad71748a4f698c482a43c7b341d16d84f47a"
	I0111 07:25:14.366623   18979 cri.go:96] found id: "bb18076e67f3c6ae629aa4724afeaf0fc653355feab08f2a4b65110d19e4030b"
	I0111 07:25:14.366629   18979 cri.go:96] found id: "3ffa2d6de25c6ece3569c7eff61e3fd3c0d746f7c3c515043a81e0f3e19d49cc"
	I0111 07:25:14.366636   18979 cri.go:96] found id: "883f9697f51fca90435f0ac5afb738b9e4e9e090e04188a7094a910cc6458898"
	I0111 07:25:14.366640   18979 cri.go:96] found id: ""
	I0111 07:25:14.366689   18979 ssh_runner.go:195] Run: sudo runc list -f json
	I0111 07:25:14.380343   18979 out.go:203] 
	W0111 07:25:14.381527   18979 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-11T07:25:14Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-11T07:25:14Z" level=error msg="open /run/runc: no such file or directory"
	
	W0111 07:25:14.381550   18979 out.go:285] * 
	* 
	W0111 07:25:14.382244   18979 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0111 07:25:14.383440   18979 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable nvidia-device-plugin addon: args "out/minikube-linux-amd64 -p addons-171536 addons disable nvidia-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/NvidiaDevicePlugin (5.26s)

                                                
                                    
TestAddons/parallel/Yakd (5.24s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:353: "yakd-dashboard-7bcf5795cd-twqvk" [2e0d3177-2b28-483b-95f3-ed5ff2654881] Running
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.00338427s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-171536 addons disable yakd --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-171536 addons disable yakd --alsologtostderr -v=1: exit status 11 (238.951784ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0111 07:25:24.755215   20480 out.go:360] Setting OutFile to fd 1 ...
	I0111 07:25:24.755501   20480 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 07:25:24.755511   20480 out.go:374] Setting ErrFile to fd 2...
	I0111 07:25:24.755516   20480 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 07:25:24.755695   20480 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22402-4689/.minikube/bin
	I0111 07:25:24.755972   20480 mustload.go:66] Loading cluster: addons-171536
	I0111 07:25:24.756262   20480 config.go:182] Loaded profile config "addons-171536": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 07:25:24.756276   20480 addons.go:622] checking whether the cluster is paused
	I0111 07:25:24.756350   20480 config.go:182] Loaded profile config "addons-171536": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 07:25:24.756360   20480 host.go:66] Checking if "addons-171536" exists ...
	I0111 07:25:24.756714   20480 cli_runner.go:164] Run: docker container inspect addons-171536 --format={{.State.Status}}
	I0111 07:25:24.773648   20480 ssh_runner.go:195] Run: systemctl --version
	I0111 07:25:24.773709   20480 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-171536
	I0111 07:25:24.791355   20480 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22402-4689/.minikube/machines/addons-171536/id_rsa Username:docker}
	I0111 07:25:24.891230   20480 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0111 07:25:24.891333   20480 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0111 07:25:24.921315   20480 cri.go:96] found id: "dec2189f99757ab851395688c7fc4d79ec795d60f0fe442fda316b4651b5f55d"
	I0111 07:25:24.921334   20480 cri.go:96] found id: "020fa17cec32bb1bb91372abed236b039862627b1fd8fd4fde81f48ed468506a"
	I0111 07:25:24.921337   20480 cri.go:96] found id: "4e96c6658214a060ecae16a626e9deca9b428dca524c173c67acc71276d31ed8"
	I0111 07:25:24.921340   20480 cri.go:96] found id: "61261d2742245ff39d3fe14f2e1e7bf0d863678be3fe67b5c66b7add2e5dafe0"
	I0111 07:25:24.921343   20480 cri.go:96] found id: "e82c2d321912890e0c95828d601e534add5486d961339090496e0b7bf8e14e40"
	I0111 07:25:24.921346   20480 cri.go:96] found id: "0073925a1c9e9b81559aaedeae0ea376d65f0b0f86d39d521946a25c0c681ec6"
	I0111 07:25:24.921349   20480 cri.go:96] found id: "e537a2e8b2bf98a5f92bbdc7021887d792dac8e90d307f035a5b086dd94603f4"
	I0111 07:25:24.921351   20480 cri.go:96] found id: "38477b1d03e50987daaf6a39bfb28bf756ed5dc7faec7b69c6731256e2223590"
	I0111 07:25:24.921354   20480 cri.go:96] found id: "4f25c64762e22f492ff4390a2fb5b3c94b2c8b08468692efef8e56d24c717155"
	I0111 07:25:24.921365   20480 cri.go:96] found id: "378d489b0bf341cff405d4cf64f389fc60c711e7d07757a6db4ab7f7192341e6"
	I0111 07:25:24.921369   20480 cri.go:96] found id: "3ff9219f13f87d357a602e307148cc4f3f4ba6e7b581c252ebcf4276c82d4a42"
	I0111 07:25:24.921372   20480 cri.go:96] found id: "c8cfde75062d12d10e4e19ee50819af72fa78518b9ff94a977400346bedc564f"
	I0111 07:25:24.921374   20480 cri.go:96] found id: "412a3cc9d76f6e48dcc863110232a61b800f70b26ea661fa84e8a16c262f410f"
	I0111 07:25:24.921377   20480 cri.go:96] found id: "f453e17ecff346a3d083681ecb4ca0954e14c5abac83bbda4d571c8646eb5500"
	I0111 07:25:24.921380   20480 cri.go:96] found id: "c30a8b92335dd9ed1a8ed66d1318db07964121f3abe89ee920af34af1cf340a5"
	I0111 07:25:24.921386   20480 cri.go:96] found id: "d5450d34ec89c1809b1620960a5552a805b79b716d6ca586749b1118a929e70b"
	I0111 07:25:24.921389   20480 cri.go:96] found id: "bb9ae55f945e1d917ea4576711c3aa2aa501f59b2c2ba14af5414eb1a9b01ecc"
	I0111 07:25:24.921393   20480 cri.go:96] found id: "786a8dbc1c197cacefbc38263fc9b22c0f085386e7ba56db80565adc04738c53"
	I0111 07:25:24.921399   20480 cri.go:96] found id: "47e635841991a532d5fdca1c3fb0fca6b3bafa3cfd322360c14ae5982bad6e65"
	I0111 07:25:24.921401   20480 cri.go:96] found id: "3fee420a7842ae5fb9d53e5ad7e801e60256ab3660cc9f2ebfdd5fb9619d8ead"
	I0111 07:25:24.921404   20480 cri.go:96] found id: "f61be6c7d6abd9153c648bad6294ad71748a4f698c482a43c7b341d16d84f47a"
	I0111 07:25:24.921409   20480 cri.go:96] found id: "bb18076e67f3c6ae629aa4724afeaf0fc653355feab08f2a4b65110d19e4030b"
	I0111 07:25:24.921412   20480 cri.go:96] found id: "3ffa2d6de25c6ece3569c7eff61e3fd3c0d746f7c3c515043a81e0f3e19d49cc"
	I0111 07:25:24.921415   20480 cri.go:96] found id: "883f9697f51fca90435f0ac5afb738b9e4e9e090e04188a7094a910cc6458898"
	I0111 07:25:24.921418   20480 cri.go:96] found id: ""
	I0111 07:25:24.921454   20480 ssh_runner.go:195] Run: sudo runc list -f json
	I0111 07:25:24.935087   20480 out.go:203] 
	W0111 07:25:24.936259   20480 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-11T07:25:24Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-11T07:25:24Z" level=error msg="open /run/runc: no such file or directory"
	
	W0111 07:25:24.936277   20480 out.go:285] * 
	* 
	W0111 07:25:24.936948   20480 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0111 07:25:24.938146   20480 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable yakd addon: args "out/minikube-linux-amd64 -p addons-171536 addons disable yakd --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Yakd (5.24s)

                                                
                                    
TestAddons/parallel/AmdGpuDevicePlugin (5.25s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1040: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: waiting 6m0s for pods matching "name=amd-gpu-device-plugin" in namespace "kube-system" ...
helpers_test.go:353: "amd-gpu-device-plugin-xz9v9" [cf7f647a-3dcb-409b-9b1e-6772fb528f34] Running
addons_test.go:1040: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: name=amd-gpu-device-plugin healthy within 5.003094672s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-171536 addons disable amd-gpu-device-plugin --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-171536 addons disable amd-gpu-device-plugin --alsologtostderr -v=1: exit status 11 (240.999724ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0111 07:25:22.034449   19666 out.go:360] Setting OutFile to fd 1 ...
	I0111 07:25:22.034720   19666 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 07:25:22.034731   19666 out.go:374] Setting ErrFile to fd 2...
	I0111 07:25:22.034738   19666 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 07:25:22.034977   19666 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22402-4689/.minikube/bin
	I0111 07:25:22.035354   19666 mustload.go:66] Loading cluster: addons-171536
	I0111 07:25:22.035704   19666 config.go:182] Loaded profile config "addons-171536": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 07:25:22.035724   19666 addons.go:622] checking whether the cluster is paused
	I0111 07:25:22.035858   19666 config.go:182] Loaded profile config "addons-171536": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 07:25:22.035875   19666 host.go:66] Checking if "addons-171536" exists ...
	I0111 07:25:22.036280   19666 cli_runner.go:164] Run: docker container inspect addons-171536 --format={{.State.Status}}
	I0111 07:25:22.054478   19666 ssh_runner.go:195] Run: systemctl --version
	I0111 07:25:22.054534   19666 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-171536
	I0111 07:25:22.072399   19666 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22402-4689/.minikube/machines/addons-171536/id_rsa Username:docker}
	I0111 07:25:22.173209   19666 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0111 07:25:22.173292   19666 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0111 07:25:22.202787   19666 cri.go:96] found id: "dec2189f99757ab851395688c7fc4d79ec795d60f0fe442fda316b4651b5f55d"
	I0111 07:25:22.202820   19666 cri.go:96] found id: "020fa17cec32bb1bb91372abed236b039862627b1fd8fd4fde81f48ed468506a"
	I0111 07:25:22.202827   19666 cri.go:96] found id: "4e96c6658214a060ecae16a626e9deca9b428dca524c173c67acc71276d31ed8"
	I0111 07:25:22.202833   19666 cri.go:96] found id: "61261d2742245ff39d3fe14f2e1e7bf0d863678be3fe67b5c66b7add2e5dafe0"
	I0111 07:25:22.202838   19666 cri.go:96] found id: "e82c2d321912890e0c95828d601e534add5486d961339090496e0b7bf8e14e40"
	I0111 07:25:22.202843   19666 cri.go:96] found id: "0073925a1c9e9b81559aaedeae0ea376d65f0b0f86d39d521946a25c0c681ec6"
	I0111 07:25:22.202848   19666 cri.go:96] found id: "e537a2e8b2bf98a5f92bbdc7021887d792dac8e90d307f035a5b086dd94603f4"
	I0111 07:25:22.202852   19666 cri.go:96] found id: "38477b1d03e50987daaf6a39bfb28bf756ed5dc7faec7b69c6731256e2223590"
	I0111 07:25:22.202857   19666 cri.go:96] found id: "4f25c64762e22f492ff4390a2fb5b3c94b2c8b08468692efef8e56d24c717155"
	I0111 07:25:22.202867   19666 cri.go:96] found id: "378d489b0bf341cff405d4cf64f389fc60c711e7d07757a6db4ab7f7192341e6"
	I0111 07:25:22.202876   19666 cri.go:96] found id: "3ff9219f13f87d357a602e307148cc4f3f4ba6e7b581c252ebcf4276c82d4a42"
	I0111 07:25:22.202879   19666 cri.go:96] found id: "c8cfde75062d12d10e4e19ee50819af72fa78518b9ff94a977400346bedc564f"
	I0111 07:25:22.202882   19666 cri.go:96] found id: "412a3cc9d76f6e48dcc863110232a61b800f70b26ea661fa84e8a16c262f410f"
	I0111 07:25:22.202884   19666 cri.go:96] found id: "f453e17ecff346a3d083681ecb4ca0954e14c5abac83bbda4d571c8646eb5500"
	I0111 07:25:22.202887   19666 cri.go:96] found id: "c30a8b92335dd9ed1a8ed66d1318db07964121f3abe89ee920af34af1cf340a5"
	I0111 07:25:22.202909   19666 cri.go:96] found id: "d5450d34ec89c1809b1620960a5552a805b79b716d6ca586749b1118a929e70b"
	I0111 07:25:22.202920   19666 cri.go:96] found id: "bb9ae55f945e1d917ea4576711c3aa2aa501f59b2c2ba14af5414eb1a9b01ecc"
	I0111 07:25:22.202927   19666 cri.go:96] found id: "786a8dbc1c197cacefbc38263fc9b22c0f085386e7ba56db80565adc04738c53"
	I0111 07:25:22.202934   19666 cri.go:96] found id: "47e635841991a532d5fdca1c3fb0fca6b3bafa3cfd322360c14ae5982bad6e65"
	I0111 07:25:22.202939   19666 cri.go:96] found id: "3fee420a7842ae5fb9d53e5ad7e801e60256ab3660cc9f2ebfdd5fb9619d8ead"
	I0111 07:25:22.202946   19666 cri.go:96] found id: "f61be6c7d6abd9153c648bad6294ad71748a4f698c482a43c7b341d16d84f47a"
	I0111 07:25:22.202951   19666 cri.go:96] found id: "bb18076e67f3c6ae629aa4724afeaf0fc653355feab08f2a4b65110d19e4030b"
	I0111 07:25:22.202958   19666 cri.go:96] found id: "3ffa2d6de25c6ece3569c7eff61e3fd3c0d746f7c3c515043a81e0f3e19d49cc"
	I0111 07:25:22.202963   19666 cri.go:96] found id: "883f9697f51fca90435f0ac5afb738b9e4e9e090e04188a7094a910cc6458898"
	I0111 07:25:22.202967   19666 cri.go:96] found id: ""
	I0111 07:25:22.203017   19666 ssh_runner.go:195] Run: sudo runc list -f json
	I0111 07:25:22.216770   19666 out.go:203] 
	W0111 07:25:22.218009   19666 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-11T07:25:22Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-11T07:25:22Z" level=error msg="open /run/runc: no such file or directory"
	
	W0111 07:25:22.218028   19666 out.go:285] * 
	* 
	W0111 07:25:22.218902   19666 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_d91df5e23a6c7812cf3b3b0d72c142ff742a541e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_d91df5e23a6c7812cf3b3b0d72c142ff742a541e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0111 07:25:22.220152   19666 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable amd-gpu-device-plugin addon: args "out/minikube-linux-amd64 -p addons-171536 addons disable amd-gpu-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/AmdGpuDevicePlugin (5.25s)

                                                
                                    
TestJSONOutput/pause/Command (2.04s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-698286 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p json-output-698286 --output=json --user=testUser: exit status 80 (2.036663002s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"5ae791e4-bf3e-46e6-bc38-30757c49b727","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Pausing node json-output-698286 ...","name":"Pausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"b2215849-0a2e-4a57-abb9-8b5217b782df","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list running: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2026-01-11T07:37:45Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_PAUSE","url":""}}
	{"specversion":"1.0","id":"b75201ce-308b-4de9-9741-d1f4d147cd6d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-amd64 pause -p json-output-698286 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/pause/Command (2.04s)

                                                
                                    
TestJSONOutput/unpause/Command (1.97s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-698286 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-amd64 unpause -p json-output-698286 --output=json --user=testUser: exit status 80 (1.96645918s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"4a63d617-1fed-47f0-8b8d-b0b86efb2f57","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Unpausing node json-output-698286 ...","name":"Unpausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"fa51522a-a49d-4a1b-9354-58ca069a17a0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list paused: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2026-01-11T07:37:47Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_UNPAUSE","url":""}}
	{"specversion":"1.0","id":"b573698a-586d-4639-9067-2215a64f8ec1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_unpause_85c908ac827001a7ced33feb0caf7da086d17584_0.log                 │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-amd64 unpause -p json-output-698286 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/unpause/Command (1.97s)

                                                
                                    
TestPause/serial/Pause (6.11s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-979859 --alsologtostderr -v=5
pause_test.go:110: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p pause-979859 --alsologtostderr -v=5: exit status 80 (2.327641563s)

                                                
                                                
-- stdout --
	* Pausing node pause-979859 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0111 07:49:07.381745  182834 out.go:360] Setting OutFile to fd 1 ...
	I0111 07:49:07.381870  182834 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 07:49:07.381887  182834 out.go:374] Setting ErrFile to fd 2...
	I0111 07:49:07.381894  182834 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 07:49:07.382108  182834 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22402-4689/.minikube/bin
	I0111 07:49:07.382405  182834 out.go:368] Setting JSON to false
	I0111 07:49:07.382427  182834 mustload.go:66] Loading cluster: pause-979859
	I0111 07:49:07.382898  182834 config.go:182] Loaded profile config "pause-979859": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 07:49:07.383428  182834 cli_runner.go:164] Run: docker container inspect pause-979859 --format={{.State.Status}}
	I0111 07:49:07.406362  182834 host.go:66] Checking if "pause-979859" exists ...
	I0111 07:49:07.406849  182834 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0111 07:49:07.471700  182834 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:69 OomKillDisable:false NGoroutines:82 SystemTime:2026-01-11 07:49:07.461887064 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0111 07:49:07.472482  182834 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22376/minikube-v1.37.0-1767438792-22376-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1767438792-22376/minikube-v1.37.0-1767438792-22376-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1767438792-22376-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) preload-source:auto profile:pause-979859 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) rosetta:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true
) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I0111 07:49:07.491187  182834 out.go:179] * Pausing node pause-979859 ... 
	I0111 07:49:07.493115  182834 host.go:66] Checking if "pause-979859" exists ...
	I0111 07:49:07.493463  182834 ssh_runner.go:195] Run: systemctl --version
	I0111 07:49:07.493528  182834 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-979859
	I0111 07:49:07.516168  182834 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32963 SSHKeyPath:/home/jenkins/minikube-integration/22402-4689/.minikube/machines/pause-979859/id_rsa Username:docker}
	I0111 07:49:07.626939  182834 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0111 07:49:07.643252  182834 pause.go:52] kubelet running: true
	I0111 07:49:07.643319  182834 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0111 07:49:07.805026  182834 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I0111 07:49:07.805129  182834 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0111 07:49:07.882214  182834 cri.go:96] found id: "fb3d48a5461e3d903bd4f6a53010691c4575384f0fe1f9858577740063664cfc"
	I0111 07:49:07.882247  182834 cri.go:96] found id: "03c0e211978ecf5f182769c51bc9eacba756e284410fadbd73e155decf990fec"
	I0111 07:49:07.882252  182834 cri.go:96] found id: "39df3868e30b956ed7b402910c52eb3d3f645eff78410c5ced0e05e07cfb2605"
	I0111 07:49:07.882257  182834 cri.go:96] found id: "4cbe4980b4381d3cedbd6d8e5865265d8171d37f8ceeeece80e97c768b8baca8"
	I0111 07:49:07.882260  182834 cri.go:96] found id: "87587bcd281f1c0a2aef860698f2ac71a9f66c8ad72bd9d536d718035eb38bbc"
	I0111 07:49:07.882264  182834 cri.go:96] found id: "697756342f1b7fa4f1be176804f26aeac8f5640dc6a8e9956e55c932c1ec1e0c"
	I0111 07:49:07.882268  182834 cri.go:96] found id: "8cefc332f1bb940a084c44f74130ed4c2b329c7fb9503f73c7ec6a9e051e9362"
	I0111 07:49:07.882272  182834 cri.go:96] found id: ""
	I0111 07:49:07.882333  182834 ssh_runner.go:195] Run: sudo runc list -f json
	I0111 07:49:07.896541  182834 retry.go:84] will retry after 200ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-11T07:49:07Z" level=error msg="open /run/runc: no such file or directory"
	I0111 07:49:08.113012  182834 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0111 07:49:08.131340  182834 pause.go:52] kubelet running: false
	I0111 07:49:08.131404  182834 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0111 07:49:08.255475  182834 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I0111 07:49:08.255559  182834 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0111 07:49:08.342029  182834 cri.go:96] found id: "fb3d48a5461e3d903bd4f6a53010691c4575384f0fe1f9858577740063664cfc"
	I0111 07:49:08.342051  182834 cri.go:96] found id: "03c0e211978ecf5f182769c51bc9eacba756e284410fadbd73e155decf990fec"
	I0111 07:49:08.342055  182834 cri.go:96] found id: "39df3868e30b956ed7b402910c52eb3d3f645eff78410c5ced0e05e07cfb2605"
	I0111 07:49:08.342059  182834 cri.go:96] found id: "4cbe4980b4381d3cedbd6d8e5865265d8171d37f8ceeeece80e97c768b8baca8"
	I0111 07:49:08.342061  182834 cri.go:96] found id: "87587bcd281f1c0a2aef860698f2ac71a9f66c8ad72bd9d536d718035eb38bbc"
	I0111 07:49:08.342064  182834 cri.go:96] found id: "697756342f1b7fa4f1be176804f26aeac8f5640dc6a8e9956e55c932c1ec1e0c"
	I0111 07:49:08.342067  182834 cri.go:96] found id: "8cefc332f1bb940a084c44f74130ed4c2b329c7fb9503f73c7ec6a9e051e9362"
	I0111 07:49:08.342069  182834 cri.go:96] found id: ""
	I0111 07:49:08.342110  182834 ssh_runner.go:195] Run: sudo runc list -f json
	I0111 07:49:08.789411  182834 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0111 07:49:08.803284  182834 pause.go:52] kubelet running: false
	I0111 07:49:08.803346  182834 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0111 07:49:08.911474  182834 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I0111 07:49:08.911550  182834 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0111 07:49:08.977973  182834 cri.go:96] found id: "fb3d48a5461e3d903bd4f6a53010691c4575384f0fe1f9858577740063664cfc"
	I0111 07:49:08.977995  182834 cri.go:96] found id: "03c0e211978ecf5f182769c51bc9eacba756e284410fadbd73e155decf990fec"
	I0111 07:49:08.977999  182834 cri.go:96] found id: "39df3868e30b956ed7b402910c52eb3d3f645eff78410c5ced0e05e07cfb2605"
	I0111 07:49:08.978002  182834 cri.go:96] found id: "4cbe4980b4381d3cedbd6d8e5865265d8171d37f8ceeeece80e97c768b8baca8"
	I0111 07:49:08.978005  182834 cri.go:96] found id: "87587bcd281f1c0a2aef860698f2ac71a9f66c8ad72bd9d536d718035eb38bbc"
	I0111 07:49:08.978008  182834 cri.go:96] found id: "697756342f1b7fa4f1be176804f26aeac8f5640dc6a8e9956e55c932c1ec1e0c"
	I0111 07:49:08.978010  182834 cri.go:96] found id: "8cefc332f1bb940a084c44f74130ed4c2b329c7fb9503f73c7ec6a9e051e9362"
	I0111 07:49:08.978013  182834 cri.go:96] found id: ""
	I0111 07:49:08.978050  182834 ssh_runner.go:195] Run: sudo runc list -f json
	I0111 07:49:09.398513  182834 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0111 07:49:09.414063  182834 pause.go:52] kubelet running: false
	I0111 07:49:09.414120  182834 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0111 07:49:09.522253  182834 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I0111 07:49:09.522336  182834 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0111 07:49:09.612382  182834 cri.go:96] found id: "fb3d48a5461e3d903bd4f6a53010691c4575384f0fe1f9858577740063664cfc"
	I0111 07:49:09.612402  182834 cri.go:96] found id: "03c0e211978ecf5f182769c51bc9eacba756e284410fadbd73e155decf990fec"
	I0111 07:49:09.612405  182834 cri.go:96] found id: "39df3868e30b956ed7b402910c52eb3d3f645eff78410c5ced0e05e07cfb2605"
	I0111 07:49:09.612408  182834 cri.go:96] found id: "4cbe4980b4381d3cedbd6d8e5865265d8171d37f8ceeeece80e97c768b8baca8"
	I0111 07:49:09.612411  182834 cri.go:96] found id: "87587bcd281f1c0a2aef860698f2ac71a9f66c8ad72bd9d536d718035eb38bbc"
	I0111 07:49:09.612414  182834 cri.go:96] found id: "697756342f1b7fa4f1be176804f26aeac8f5640dc6a8e9956e55c932c1ec1e0c"
	I0111 07:49:09.612417  182834 cri.go:96] found id: "8cefc332f1bb940a084c44f74130ed4c2b329c7fb9503f73c7ec6a9e051e9362"
	I0111 07:49:09.612419  182834 cri.go:96] found id: ""
	I0111 07:49:09.612461  182834 ssh_runner.go:195] Run: sudo runc list -f json
	I0111 07:49:09.628396  182834 out.go:203] 
	W0111 07:49:09.629871  182834 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-11T07:49:09Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-11T07:49:09Z" level=error msg="open /run/runc: no such file or directory"
	
	W0111 07:49:09.629895  182834 out.go:285] * 
	* 
	W0111 07:49:09.631942  182834 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0111 07:49:09.633426  182834 out.go:203] 

                                                
                                                
** /stderr **
pause_test.go:112: failed to pause minikube with args: "out/minikube-linux-amd64 pause -p pause-979859 --alsologtostderr -v=5" : exit status 80
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect pause-979859
helpers_test.go:244: (dbg) docker inspect pause-979859:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "2e11570b3a5f5d1a8285ef841adad93e868a808e5d0cb41dced9634227843232",
	        "Created": "2026-01-11T07:48:19.862374935Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 167197,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2026-01-11T07:48:22.076887163Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:9d81ad6f098dcf669510a21e6f98da4d3301079c139c22f3d7bcee050830f45e",
	        "ResolvConfPath": "/var/lib/docker/containers/2e11570b3a5f5d1a8285ef841adad93e868a808e5d0cb41dced9634227843232/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2e11570b3a5f5d1a8285ef841adad93e868a808e5d0cb41dced9634227843232/hostname",
	        "HostsPath": "/var/lib/docker/containers/2e11570b3a5f5d1a8285ef841adad93e868a808e5d0cb41dced9634227843232/hosts",
	        "LogPath": "/var/lib/docker/containers/2e11570b3a5f5d1a8285ef841adad93e868a808e5d0cb41dced9634227843232/2e11570b3a5f5d1a8285ef841adad93e868a808e5d0cb41dced9634227843232-json.log",
	        "Name": "/pause-979859",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-979859:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "pause-979859",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "2e11570b3a5f5d1a8285ef841adad93e868a808e5d0cb41dced9634227843232",
	                "LowerDir": "/var/lib/docker/overlay2/9e1f653742904bed45ebe7e37f510af53e4a9cfd01a432978f779ec1d06d2ebb-init/diff:/var/lib/docker/overlay2/fbed3e2386e680328722e9ed68250a4aae1dd3b50a64848ee2ebb685fb1cc002/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9e1f653742904bed45ebe7e37f510af53e4a9cfd01a432978f779ec1d06d2ebb/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9e1f653742904bed45ebe7e37f510af53e4a9cfd01a432978f779ec1d06d2ebb/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9e1f653742904bed45ebe7e37f510af53e4a9cfd01a432978f779ec1d06d2ebb/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-979859",
	                "Source": "/var/lib/docker/volumes/pause-979859/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-979859",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-979859",
	                "name.minikube.sigs.k8s.io": "pause-979859",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "c0c58ae20feaafaaf7b56f26fba98030bbf2cb0945249febf541b63ea6a1e802",
	            "SandboxKey": "/var/run/docker/netns/c0c58ae20fea",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32963"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32964"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32967"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32965"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32966"
	                    }
	                ]
	            },
	            "Networks": {
	                "pause-979859": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "aecd8d6df546bb374391fc83e2a312baeedd7d04fe0f100e75529f02a50d32b7",
	                    "EndpointID": "a89499b8c8386715216720629093ab971835240b93b0994f7f0289d98752bc34",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "aa:8d:07:0d:f7:3f",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-979859",
	                        "2e11570b3a5f"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-979859 -n pause-979859
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-979859 -n pause-979859: exit status 2 (407.349071ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p pause-979859 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p pause-979859 logs -n 25: (1.107200402s)
helpers_test.go:261: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬─────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                   ARGS                                                                   │           PROFILE           │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼─────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p scheduled-stop-369235 --memory=3072 --driver=docker  --container-runtime=crio                                                         │ scheduled-stop-369235       │ jenkins │ v1.37.0 │ 11 Jan 26 07:46 UTC │ 11 Jan 26 07:46 UTC │
	│ stop    │ -p scheduled-stop-369235 --schedule 5m -v=5 --alsologtostderr                                                                            │ scheduled-stop-369235       │ jenkins │ v1.37.0 │ 11 Jan 26 07:46 UTC │                     │
	│ stop    │ -p scheduled-stop-369235 --schedule 5m -v=5 --alsologtostderr                                                                            │ scheduled-stop-369235       │ jenkins │ v1.37.0 │ 11 Jan 26 07:46 UTC │                     │
	│ stop    │ -p scheduled-stop-369235 --schedule 5m -v=5 --alsologtostderr                                                                            │ scheduled-stop-369235       │ jenkins │ v1.37.0 │ 11 Jan 26 07:46 UTC │                     │
	│ stop    │ -p scheduled-stop-369235 --schedule 15s -v=5 --alsologtostderr                                                                           │ scheduled-stop-369235       │ jenkins │ v1.37.0 │ 11 Jan 26 07:46 UTC │                     │
	│ stop    │ -p scheduled-stop-369235 --schedule 15s -v=5 --alsologtostderr                                                                           │ scheduled-stop-369235       │ jenkins │ v1.37.0 │ 11 Jan 26 07:46 UTC │                     │
	│ stop    │ -p scheduled-stop-369235 --schedule 15s -v=5 --alsologtostderr                                                                           │ scheduled-stop-369235       │ jenkins │ v1.37.0 │ 11 Jan 26 07:46 UTC │                     │
	│ stop    │ -p scheduled-stop-369235 --cancel-scheduled                                                                                              │ scheduled-stop-369235       │ jenkins │ v1.37.0 │ 11 Jan 26 07:46 UTC │ 11 Jan 26 07:46 UTC │
	│ stop    │ -p scheduled-stop-369235 --schedule 15s -v=5 --alsologtostderr                                                                           │ scheduled-stop-369235       │ jenkins │ v1.37.0 │ 11 Jan 26 07:47 UTC │                     │
	│ stop    │ -p scheduled-stop-369235 --schedule 15s -v=5 --alsologtostderr                                                                           │ scheduled-stop-369235       │ jenkins │ v1.37.0 │ 11 Jan 26 07:47 UTC │                     │
	│ stop    │ -p scheduled-stop-369235 --schedule 15s -v=5 --alsologtostderr                                                                           │ scheduled-stop-369235       │ jenkins │ v1.37.0 │ 11 Jan 26 07:47 UTC │ 11 Jan 26 07:47 UTC │
	│ delete  │ -p scheduled-stop-369235                                                                                                                 │ scheduled-stop-369235       │ jenkins │ v1.37.0 │ 11 Jan 26 07:47 UTC │ 11 Jan 26 07:47 UTC │
	│ start   │ -p insufficient-storage-671147 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio                         │ insufficient-storage-671147 │ jenkins │ v1.37.0 │ 11 Jan 26 07:47 UTC │                     │
	│ delete  │ -p insufficient-storage-671147                                                                                                           │ insufficient-storage-671147 │ jenkins │ v1.37.0 │ 11 Jan 26 07:48 UTC │ 11 Jan 26 07:48 UTC │
	│ start   │ -p offline-crio-966080 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio                        │ offline-crio-966080         │ jenkins │ v1.37.0 │ 11 Jan 26 07:48 UTC │ 11 Jan 26 07:49 UTC │
	│ start   │ -p pause-979859 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                │ pause-979859                │ jenkins │ v1.37.0 │ 11 Jan 26 07:48 UTC │ 11 Jan 26 07:49 UTC │
	│ start   │ -p stopped-upgrade-992502 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ stopped-upgrade-992502      │ jenkins │ v1.35.0 │ 11 Jan 26 07:48 UTC │ 11 Jan 26 07:48 UTC │
	│ start   │ -p running-upgrade-010890 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ running-upgrade-010890      │ jenkins │ v1.35.0 │ 11 Jan 26 07:48 UTC │ 11 Jan 26 07:48 UTC │
	│ stop    │ stopped-upgrade-992502 stop                                                                                                              │ stopped-upgrade-992502      │ jenkins │ v1.35.0 │ 11 Jan 26 07:48 UTC │ 11 Jan 26 07:48 UTC │
	│ start   │ -p running-upgrade-010890 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ running-upgrade-010890      │ jenkins │ v1.37.0 │ 11 Jan 26 07:48 UTC │                     │
	│ start   │ -p stopped-upgrade-992502 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ stopped-upgrade-992502      │ jenkins │ v1.37.0 │ 11 Jan 26 07:48 UTC │                     │
	│ delete  │ -p offline-crio-966080                                                                                                                   │ offline-crio-966080         │ jenkins │ v1.37.0 │ 11 Jan 26 07:49 UTC │ 11 Jan 26 07:49 UTC │
	│ start   │ -p pause-979859 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                         │ pause-979859                │ jenkins │ v1.37.0 │ 11 Jan 26 07:49 UTC │ 11 Jan 26 07:49 UTC │
	│ start   │ -p kubernetes-upgrade-416144 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-416144   │ jenkins │ v1.37.0 │ 11 Jan 26 07:49 UTC │                     │
	│ pause   │ -p pause-979859 --alsologtostderr -v=5                                                                                                   │ pause-979859                │ jenkins │ v1.37.0 │ 11 Jan 26 07:49 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴─────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2026/01/11 07:49:03
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0111 07:49:03.624213  181612 out.go:360] Setting OutFile to fd 1 ...
	I0111 07:49:03.624322  181612 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 07:49:03.624332  181612 out.go:374] Setting ErrFile to fd 2...
	I0111 07:49:03.624336  181612 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 07:49:03.624566  181612 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22402-4689/.minikube/bin
	I0111 07:49:03.625100  181612 out.go:368] Setting JSON to false
	I0111 07:49:03.626272  181612 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":1892,"bootTime":1768115852,"procs":313,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0111 07:49:03.626327  181612 start.go:143] virtualization: kvm guest
	I0111 07:49:03.628010  181612 out.go:179] * [kubernetes-upgrade-416144] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0111 07:49:03.629015  181612 out.go:179]   - MINIKUBE_LOCATION=22402
	I0111 07:49:03.629032  181612 notify.go:221] Checking for updates...
	I0111 07:49:03.631281  181612 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0111 07:49:03.632548  181612 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22402-4689/kubeconfig
	I0111 07:49:03.633864  181612 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22402-4689/.minikube
	I0111 07:49:03.634904  181612 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0111 07:49:03.635962  181612 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0111 07:49:03.637524  181612 config.go:182] Loaded profile config "pause-979859": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 07:49:03.637638  181612 config.go:182] Loaded profile config "running-upgrade-010890": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0111 07:49:03.637727  181612 config.go:182] Loaded profile config "stopped-upgrade-992502": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0111 07:49:03.637843  181612 driver.go:422] Setting default libvirt URI to qemu:///system
	I0111 07:49:03.664620  181612 docker.go:124] docker version: linux-29.1.4:Docker Engine - Community
	I0111 07:49:03.664700  181612 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0111 07:49:03.724183  181612 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:77 SystemTime:2026-01-11 07:49:03.712986258 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0111 07:49:03.724276  181612 docker.go:319] overlay module found
	I0111 07:49:03.726052  181612 out.go:179] * Using the docker driver based on user configuration
	I0111 07:49:03.727077  181612 start.go:309] selected driver: docker
	I0111 07:49:03.727091  181612 start.go:928] validating driver "docker" against <nil>
	I0111 07:49:03.727102  181612 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0111 07:49:03.727629  181612 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0111 07:49:03.796250  181612 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:77 SystemTime:2026-01-11 07:49:03.786416716 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0111 07:49:03.796435  181612 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I0111 07:49:03.796677  181612 start_flags.go:1001] Wait components to verify : map[apiserver:true system_pods:true]
	I0111 07:49:03.800918  181612 out.go:179] * Using Docker driver with root privileges
	I0111 07:49:03.802136  181612 cni.go:84] Creating CNI manager for ""
	I0111 07:49:03.802218  181612 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0111 07:49:03.802233  181612 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I0111 07:49:03.802316  181612 start.go:353] cluster config:
	{Name:kubernetes-upgrade-416144 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:kubernetes-upgrade-416144 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluste
r.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0111 07:49:03.803590  181612 out.go:179] * Starting "kubernetes-upgrade-416144" primary control-plane node in "kubernetes-upgrade-416144" cluster
	I0111 07:49:03.804469  181612 cache.go:134] Beginning downloading kic base image for docker with crio
	I0111 07:49:03.805497  181612 out.go:179] * Pulling base image v0.0.48-1768032998-22402 ...
	I0111 07:49:03.806523  181612 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I0111 07:49:03.806556  181612 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22402-4689/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I0111 07:49:03.806568  181612 cache.go:65] Caching tarball of preloaded images
	I0111 07:49:03.806566  181612 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 in local docker daemon
	I0111 07:49:03.806657  181612 preload.go:251] Found /home/jenkins/minikube-integration/22402-4689/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0111 07:49:03.806674  181612 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I0111 07:49:03.806782  181612 profile.go:143] Saving config to /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/kubernetes-upgrade-416144/config.json ...
	I0111 07:49:03.806818  181612 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/kubernetes-upgrade-416144/config.json: {Name:mk266c72f0852130996f7bb577db78ce69fe6e67 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 07:49:03.829445  181612 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 in local docker daemon, skipping pull
	I0111 07:49:03.829462  181612 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 exists in daemon, skipping load
	I0111 07:49:03.829480  181612 cache.go:243] Successfully downloaded all kic artifacts
	I0111 07:49:03.829506  181612 start.go:360] acquireMachinesLock for kubernetes-upgrade-416144: {Name:mk65ce35a8e0a0a28c7a78c6010937ed11fb4751 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0111 07:49:03.829590  181612 start.go:364] duration metric: took 70.358µs to acquireMachinesLock for "kubernetes-upgrade-416144"
	I0111 07:49:03.829612  181612 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-416144 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:kubernetes-upgrade-416144 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:
SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0111 07:49:03.829670  181612 start.go:125] createHost starting for "" (driver="docker")
	I0111 07:49:01.394934  180680 out.go:252] * Updating the running docker "pause-979859" container ...
	I0111 07:49:01.394970  180680 machine.go:94] provisionDockerMachine start ...
	I0111 07:49:01.395050  180680 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-979859
	I0111 07:49:01.413430  180680 main.go:144] libmachine: Using SSH client type: native
	I0111 07:49:01.413783  180680 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x9087e0] 0x90b480 <nil>  [] 0s} 127.0.0.1 32963 <nil> <nil>}
	I0111 07:49:01.413814  180680 main.go:144] libmachine: About to run SSH command:
	hostname
	I0111 07:49:01.564562  180680 main.go:144] libmachine: SSH cmd err, output: <nil>: pause-979859
	
	I0111 07:49:01.564590  180680 ubuntu.go:182] provisioning hostname "pause-979859"
	I0111 07:49:01.564750  180680 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-979859
	I0111 07:49:01.590444  180680 main.go:144] libmachine: Using SSH client type: native
	I0111 07:49:01.590706  180680 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x9087e0] 0x90b480 <nil>  [] 0s} 127.0.0.1 32963 <nil> <nil>}
	I0111 07:49:01.590725  180680 main.go:144] libmachine: About to run SSH command:
	sudo hostname pause-979859 && echo "pause-979859" | sudo tee /etc/hostname
	I0111 07:49:01.754692  180680 main.go:144] libmachine: SSH cmd err, output: <nil>: pause-979859
	
	I0111 07:49:01.754775  180680 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-979859
	I0111 07:49:01.773775  180680 main.go:144] libmachine: Using SSH client type: native
	I0111 07:49:01.774006  180680 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x9087e0] 0x90b480 <nil>  [] 0s} 127.0.0.1 32963 <nil> <nil>}
	I0111 07:49:01.774022  180680 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-979859' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-979859/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-979859' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0111 07:49:01.920529  180680 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I0111 07:49:01.920559  180680 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22402-4689/.minikube CaCertPath:/home/jenkins/minikube-integration/22402-4689/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22402-4689/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22402-4689/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22402-4689/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22402-4689/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22402-4689/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22402-4689/.minikube}
	I0111 07:49:01.920581  180680 ubuntu.go:190] setting up certificates
	I0111 07:49:01.920594  180680 provision.go:84] configureAuth start
	I0111 07:49:01.920651  180680 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-979859
	I0111 07:49:01.938685  180680 provision.go:143] copyHostCerts
	I0111 07:49:01.938745  180680 exec_runner.go:144] found /home/jenkins/minikube-integration/22402-4689/.minikube/ca.pem, removing ...
	I0111 07:49:01.938763  180680 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22402-4689/.minikube/ca.pem
	I0111 07:49:01.938882  180680 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22402-4689/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22402-4689/.minikube/ca.pem (1082 bytes)
	I0111 07:49:01.939015  180680 exec_runner.go:144] found /home/jenkins/minikube-integration/22402-4689/.minikube/cert.pem, removing ...
	I0111 07:49:01.939027  180680 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22402-4689/.minikube/cert.pem
	I0111 07:49:01.939060  180680 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22402-4689/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22402-4689/.minikube/cert.pem (1123 bytes)
	I0111 07:49:01.939137  180680 exec_runner.go:144] found /home/jenkins/minikube-integration/22402-4689/.minikube/key.pem, removing ...
	I0111 07:49:01.939144  180680 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22402-4689/.minikube/key.pem
	I0111 07:49:01.939170  180680 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22402-4689/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22402-4689/.minikube/key.pem (1675 bytes)
	I0111 07:49:01.939241  180680 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22402-4689/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22402-4689/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22402-4689/.minikube/certs/ca-key.pem org=jenkins.pause-979859 san=[127.0.0.1 192.168.76.2 localhost minikube pause-979859]
	I0111 07:49:01.977564  180680 provision.go:177] copyRemoteCerts
	I0111 07:49:01.977629  180680 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0111 07:49:01.977668  180680 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-979859
	I0111 07:49:01.996781  180680 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32963 SSHKeyPath:/home/jenkins/minikube-integration/22402-4689/.minikube/machines/pause-979859/id_rsa Username:docker}
	I0111 07:49:02.100090  180680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-4689/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0111 07:49:02.117842  180680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-4689/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I0111 07:49:02.135364  180680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-4689/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0111 07:49:02.152498  180680 provision.go:87] duration metric: took 231.890168ms to configureAuth
	I0111 07:49:02.152538  180680 ubuntu.go:206] setting minikube options for container-runtime
	I0111 07:49:02.152754  180680 config.go:182] Loaded profile config "pause-979859": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 07:49:02.152895  180680 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-979859
	I0111 07:49:02.171362  180680 main.go:144] libmachine: Using SSH client type: native
	I0111 07:49:02.171569  180680 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x9087e0] 0x90b480 <nil>  [] 0s} 127.0.0.1 32963 <nil> <nil>}
	I0111 07:49:02.171585  180680 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0111 07:49:02.509461  180680 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0111 07:49:02.509491  180680 machine.go:97] duration metric: took 1.11451225s to provisionDockerMachine
	I0111 07:49:02.509506  180680 start.go:293] postStartSetup for "pause-979859" (driver="docker")
	I0111 07:49:02.509548  180680 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0111 07:49:02.509636  180680 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0111 07:49:02.509699  180680 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-979859
	I0111 07:49:02.528982  180680 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32963 SSHKeyPath:/home/jenkins/minikube-integration/22402-4689/.minikube/machines/pause-979859/id_rsa Username:docker}
	I0111 07:49:02.635679  180680 ssh_runner.go:195] Run: cat /etc/os-release
	I0111 07:49:02.639526  180680 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0111 07:49:02.639556  180680 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I0111 07:49:02.639568  180680 filesync.go:126] Scanning /home/jenkins/minikube-integration/22402-4689/.minikube/addons for local assets ...
	I0111 07:49:02.639620  180680 filesync.go:126] Scanning /home/jenkins/minikube-integration/22402-4689/.minikube/files for local assets ...
	I0111 07:49:02.639723  180680 filesync.go:149] local asset: /home/jenkins/minikube-integration/22402-4689/.minikube/files/etc/ssl/certs/82382.pem -> 82382.pem in /etc/ssl/certs
	I0111 07:49:02.639871  180680 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0111 07:49:02.647976  180680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-4689/.minikube/files/etc/ssl/certs/82382.pem --> /etc/ssl/certs/82382.pem (1708 bytes)
	I0111 07:49:02.665643  180680 start.go:296] duration metric: took 156.123256ms for postStartSetup
	I0111 07:49:02.665707  180680 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0111 07:49:02.665752  180680 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-979859
	I0111 07:49:02.687499  180680 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32963 SSHKeyPath:/home/jenkins/minikube-integration/22402-4689/.minikube/machines/pause-979859/id_rsa Username:docker}
	I0111 07:49:02.787792  180680 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0111 07:49:02.792682  180680 fix.go:56] duration metric: took 1.42052414s for fixHost
	I0111 07:49:02.792703  180680 start.go:83] releasing machines lock for "pause-979859", held for 1.420591868s
	I0111 07:49:02.792760  180680 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-979859
	I0111 07:49:02.812130  180680 ssh_runner.go:195] Run: cat /version.json
	I0111 07:49:02.812197  180680 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-979859
	I0111 07:49:02.812217  180680 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0111 07:49:02.812292  180680 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-979859
	I0111 07:49:02.832640  180680 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32963 SSHKeyPath:/home/jenkins/minikube-integration/22402-4689/.minikube/machines/pause-979859/id_rsa Username:docker}
	I0111 07:49:02.832833  180680 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32963 SSHKeyPath:/home/jenkins/minikube-integration/22402-4689/.minikube/machines/pause-979859/id_rsa Username:docker}
	I0111 07:49:02.990912  180680 ssh_runner.go:195] Run: systemctl --version
	I0111 07:49:02.998356  180680 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0111 07:49:03.036713  180680 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0111 07:49:03.042045  180680 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0111 07:49:03.042133  180680 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0111 07:49:03.052365  180680 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0111 07:49:03.052388  180680 start.go:496] detecting cgroup driver to use...
	I0111 07:49:03.052429  180680 detect.go:178] detected "systemd" cgroup driver on host os
	I0111 07:49:03.052485  180680 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0111 07:49:03.068968  180680 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0111 07:49:03.081610  180680 docker.go:218] disabling cri-docker service (if available) ...
	I0111 07:49:03.081655  180680 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0111 07:49:03.096771  180680 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0111 07:49:03.109148  180680 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0111 07:49:03.218738  180680 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0111 07:49:03.331908  180680 docker.go:234] disabling docker service ...
	I0111 07:49:03.331964  180680 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0111 07:49:03.349832  180680 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0111 07:49:03.363608  180680 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0111 07:49:03.475027  180680 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0111 07:49:03.595043  180680 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0111 07:49:03.608692  180680 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0111 07:49:03.623517  180680 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0111 07:49:03.623572  180680 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 07:49:03.633308  180680 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I0111 07:49:03.633376  180680 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 07:49:03.642990  180680 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 07:49:03.653084  180680 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 07:49:03.663076  180680 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0111 07:49:03.671251  180680 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 07:49:03.680932  180680 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 07:49:03.693019  180680 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 07:49:03.702579  180680 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0111 07:49:03.710855  180680 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0111 07:49:03.718871  180680 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0111 07:49:03.842922  180680 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0111 07:49:04.040722  180680 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I0111 07:49:04.040826  180680 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0111 07:49:04.045145  180680 start.go:574] Will wait 60s for crictl version
	I0111 07:49:04.045204  180680 ssh_runner.go:195] Run: which crictl
	I0111 07:49:04.049467  180680 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I0111 07:49:04.077825  180680 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.35.0
	RuntimeApiVersion:  v1
	I0111 07:49:04.077929  180680 ssh_runner.go:195] Run: crio --version
	I0111 07:49:04.110341  180680 ssh_runner.go:195] Run: crio --version
	I0111 07:49:04.138678  180680 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.35.0 ...
	I0111 07:49:04.139874  180680 cli_runner.go:164] Run: docker network inspect pause-979859 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0111 07:49:04.159669  180680 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0111 07:49:04.164593  180680 kubeadm.go:884] updating cluster {Name:pause-979859 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:pause-979859 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerName
s:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false regist
ry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I0111 07:49:04.164780  180680 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0111 07:49:04.164864  180680 ssh_runner.go:195] Run: sudo crictl images --output json
	I0111 07:49:04.200441  180680 crio.go:561] all images are preloaded for cri-o runtime.
	I0111 07:49:04.200466  180680 crio.go:433] Images already preloaded, skipping extraction
	I0111 07:49:04.200520  180680 ssh_runner.go:195] Run: sudo crictl images --output json
	I0111 07:49:04.225929  180680 crio.go:561] all images are preloaded for cri-o runtime.
	I0111 07:49:04.225952  180680 cache_images.go:86] Images are preloaded, skipping loading
	I0111 07:49:04.225959  180680 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0 crio true true} ...
	I0111 07:49:04.226060  180680 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-979859 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:pause-979859 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0111 07:49:04.226146  180680 ssh_runner.go:195] Run: crio config
	I0111 07:49:04.286907  180680 cni.go:84] Creating CNI manager for ""
	I0111 07:49:04.286936  180680 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0111 07:49:04.286956  180680 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I0111 07:49:04.286988  180680 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-979859 NodeName:pause-979859 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes
/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0111 07:49:04.287170  180680 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-979859"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0111 07:49:04.287248  180680 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I0111 07:49:04.296480  180680 binaries.go:51] Found k8s binaries, skipping transfer
	I0111 07:49:04.296535  180680 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0111 07:49:04.304826  180680 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I0111 07:49:04.319131  180680 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0111 07:49:04.333602  180680 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2208 bytes)
	I0111 07:49:04.366509  180680 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0111 07:49:04.371465  180680 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0111 07:49:04.510685  180680 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0111 07:49:04.524699  180680 certs.go:69] Setting up /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/pause-979859 for IP: 192.168.76.2
	I0111 07:49:04.524718  180680 certs.go:195] generating shared ca certs ...
	I0111 07:49:04.524732  180680 certs.go:227] acquiring lock for ca certs: {Name:mk9f7a57f61b85f2c0f52e44b6a926769e368f21 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 07:49:04.524886  180680 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22402-4689/.minikube/ca.key
	I0111 07:49:04.524924  180680 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22402-4689/.minikube/proxy-client-ca.key
	I0111 07:49:04.524931  180680 certs.go:257] generating profile certs ...
	I0111 07:49:04.525021  180680 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/pause-979859/client.key
	I0111 07:49:04.525062  180680 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/pause-979859/apiserver.key.a6d7a2a3
	I0111 07:49:04.525094  180680 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/pause-979859/proxy-client.key
	I0111 07:49:04.525194  180680 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-4689/.minikube/certs/8238.pem (1338 bytes)
	W0111 07:49:04.525235  180680 certs.go:480] ignoring /home/jenkins/minikube-integration/22402-4689/.minikube/certs/8238_empty.pem, impossibly tiny 0 bytes
	I0111 07:49:04.525246  180680 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-4689/.minikube/certs/ca-key.pem (1679 bytes)
	I0111 07:49:04.525275  180680 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-4689/.minikube/certs/ca.pem (1082 bytes)
	I0111 07:49:04.525295  180680 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-4689/.minikube/certs/cert.pem (1123 bytes)
	I0111 07:49:04.525314  180680 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-4689/.minikube/certs/key.pem (1675 bytes)
	I0111 07:49:04.525351  180680 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-4689/.minikube/files/etc/ssl/certs/82382.pem (1708 bytes)
	I0111 07:49:04.526026  180680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-4689/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0111 07:49:04.549749  180680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-4689/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0111 07:49:04.577332  180680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-4689/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0111 07:49:04.597205  180680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-4689/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0111 07:49:04.618605  180680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/pause-979859/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0111 07:49:04.639546  180680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/pause-979859/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0111 07:49:04.659515  180680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/pause-979859/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0111 07:49:04.677954  180680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/pause-979859/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0111 07:49:04.699050  180680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-4689/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0111 07:49:04.717753  180680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-4689/.minikube/certs/8238.pem --> /usr/share/ca-certificates/8238.pem (1338 bytes)
	I0111 07:49:04.735791  180680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-4689/.minikube/files/etc/ssl/certs/82382.pem --> /usr/share/ca-certificates/82382.pem (1708 bytes)
	I0111 07:49:04.754299  180680 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I0111 07:49:04.771646  180680 ssh_runner.go:195] Run: openssl version
	I0111 07:49:04.777982  180680 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I0111 07:49:04.785836  180680 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I0111 07:49:04.793646  180680 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0111 07:49:04.797952  180680 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 11 07:23 /usr/share/ca-certificates/minikubeCA.pem
	I0111 07:49:04.798013  180680 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0111 07:49:04.834610  180680 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I0111 07:49:04.842832  180680 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8238.pem
	I0111 07:49:04.851020  180680 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8238.pem /etc/ssl/certs/8238.pem
	I0111 07:49:04.859685  180680 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8238.pem
	I0111 07:49:04.863632  180680 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 11 07:27 /usr/share/ca-certificates/8238.pem
	I0111 07:49:04.863694  180680 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8238.pem
	I0111 07:49:04.901965  180680 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I0111 07:49:04.910700  180680 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/82382.pem
	I0111 07:49:04.918601  180680 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/82382.pem /etc/ssl/certs/82382.pem
	I0111 07:49:04.926258  180680 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/82382.pem
	I0111 07:49:04.930066  180680 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 11 07:27 /usr/share/ca-certificates/82382.pem
	I0111 07:49:04.930131  180680 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/82382.pem
	I0111 07:49:04.969549  180680 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I0111 07:49:04.977983  180680 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0111 07:49:04.982225  180680 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0111 07:49:05.019102  180680 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0111 07:49:05.054090  180680 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0111 07:49:05.089884  180680 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0111 07:49:05.135289  180680 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0111 07:49:05.171131  180680 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0111 07:49:05.217191  180680 kubeadm.go:401] StartCluster: {Name:pause-979859 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:pause-979859 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[
] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-
aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0111 07:49:05.217298  180680 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0111 07:49:05.217369  180680 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0111 07:49:05.248764  180680 cri.go:96] found id: "fb3d48a5461e3d903bd4f6a53010691c4575384f0fe1f9858577740063664cfc"
	I0111 07:49:05.248792  180680 cri.go:96] found id: "03c0e211978ecf5f182769c51bc9eacba756e284410fadbd73e155decf990fec"
	I0111 07:49:05.248821  180680 cri.go:96] found id: "39df3868e30b956ed7b402910c52eb3d3f645eff78410c5ced0e05e07cfb2605"
	I0111 07:49:05.248828  180680 cri.go:96] found id: "4cbe4980b4381d3cedbd6d8e5865265d8171d37f8ceeeece80e97c768b8baca8"
	I0111 07:49:05.248834  180680 cri.go:96] found id: "87587bcd281f1c0a2aef860698f2ac71a9f66c8ad72bd9d536d718035eb38bbc"
	I0111 07:49:05.248840  180680 cri.go:96] found id: "697756342f1b7fa4f1be176804f26aeac8f5640dc6a8e9956e55c932c1ec1e0c"
	I0111 07:49:05.248846  180680 cri.go:96] found id: "8cefc332f1bb940a084c44f74130ed4c2b329c7fb9503f73c7ec6a9e051e9362"
	I0111 07:49:05.248852  180680 cri.go:96] found id: ""
	I0111 07:49:05.248916  180680 ssh_runner.go:195] Run: sudo runc list -f json
	W0111 07:49:05.261644  180680 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-11T07:49:05Z" level=error msg="open /run/runc: no such file or directory"
	I0111 07:49:05.261761  180680 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0111 07:49:05.270416  180680 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I0111 07:49:05.270435  180680 kubeadm.go:598] restartPrimaryControlPlane start ...
	I0111 07:49:05.270496  180680 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0111 07:49:05.278050  180680 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0111 07:49:05.278720  180680 kubeconfig.go:125] found "pause-979859" server: "https://192.168.76.2:8443"
	I0111 07:49:05.279651  180680 kapi.go:59] client config for pause-979859: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22402-4689/.minikube/profiles/pause-979859/client.crt", KeyFile:"/home/jenkins/minikube-integration/22402-4689/.minikube/profiles/pause-979859/client.key", CAFile:"/home/jenkins/minikube-integration/22402-4689/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(
nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f75c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0111 07:49:05.280102  180680 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0111 07:49:05.280118  180680 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0111 07:49:05.280123  180680 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I0111 07:49:05.280126  180680 envvar.go:172] "Feature gate default state" feature="InOrderInformersBatchProcess" enabled=true
	I0111 07:49:05.280130  180680 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=true
	I0111 07:49:05.280138  180680 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=true
	I0111 07:49:05.280454  180680 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0111 07:49:05.289478  180680 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I0111 07:49:05.289507  180680 kubeadm.go:602] duration metric: took 19.066744ms to restartPrimaryControlPlane
	I0111 07:49:05.289514  180680 kubeadm.go:403] duration metric: took 72.342455ms to StartCluster
	I0111 07:49:05.289528  180680 settings.go:142] acquiring lock: {Name:mk091111a06dcfa678353e028948ce400417c137 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 07:49:05.289604  180680 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22402-4689/kubeconfig
	I0111 07:49:05.291617  180680 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-4689/kubeconfig: {Name:mkf02a7b755a7b9c0efbafb355649086e2a25e84 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 07:49:05.292427  180680 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0111 07:49:05.292512  180680 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0111 07:49:05.292577  180680 config.go:182] Loaded profile config "pause-979859": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 07:49:05.298499  180680 out.go:179] * Enabled addons: 
	I0111 07:49:05.298500  180680 out.go:179] * Verifying Kubernetes components...
	I0111 07:49:04.644887  177360 api_server.go:315] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0111 07:49:04.644951  177360 api_server.go:299] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I0111 07:49:05.299859  180680 addons.go:530] duration metric: took 7.354777ms for enable addons: enabled=[]
	I0111 07:49:05.299884  180680 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0111 07:49:05.412089  180680 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0111 07:49:05.425138  180680 node_ready.go:35] waiting up to 6m0s for node "pause-979859" to be "Ready" ...
	I0111 07:49:05.432713  180680 node_ready.go:49] node "pause-979859" is "Ready"
	I0111 07:49:05.432741  180680 node_ready.go:38] duration metric: took 7.552152ms for node "pause-979859" to be "Ready" ...
	I0111 07:49:05.432757  180680 api_server.go:52] waiting for apiserver process to appear ...
	I0111 07:49:05.432830  180680 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0111 07:49:05.444569  180680 api_server.go:72] duration metric: took 152.103738ms to wait for apiserver process to appear ...
	I0111 07:49:05.444614  180680 api_server.go:88] waiting for apiserver healthz status ...
	I0111 07:49:05.444636  180680 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0111 07:49:05.448726  180680 api_server.go:325] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0111 07:49:05.449577  180680 api_server.go:141] control plane version: v1.35.0
	I0111 07:49:05.449599  180680 api_server.go:131] duration metric: took 4.976043ms to wait for apiserver health ...
	I0111 07:49:05.449607  180680 system_pods.go:43] waiting for kube-system pods to appear ...
	I0111 07:49:05.452775  180680 system_pods.go:59] 7 kube-system pods found
	I0111 07:49:05.452832  180680 system_pods.go:61] "coredns-7d764666f9-p8c2j" [aa1e3126-8605-4b65-8bd5-601f8a9e4971] Running
	I0111 07:49:05.452844  180680 system_pods.go:61] "etcd-pause-979859" [fc68f4d1-8c13-4bc8-9b74-65ff40bb8de1] Running
	I0111 07:49:05.452851  180680 system_pods.go:61] "kindnet-8zgsx" [92894192-3035-436e-bad2-126e9c157984] Running
	I0111 07:49:05.452858  180680 system_pods.go:61] "kube-apiserver-pause-979859" [209a55cc-a23c-4bb8-bb8f-48375d938576] Running
	I0111 07:49:05.452865  180680 system_pods.go:61] "kube-controller-manager-pause-979859" [5120c914-5c48-4c18-a6b3-812d88e8ea84] Running
	I0111 07:49:05.452874  180680 system_pods.go:61] "kube-proxy-99tf8" [92494e6a-e314-4b9c-93d4-5a15fe8e759f] Running
	I0111 07:49:05.452879  180680 system_pods.go:61] "kube-scheduler-pause-979859" [2970e557-ea1b-4466-a213-954e07b90bec] Running
	I0111 07:49:05.452887  180680 system_pods.go:74] duration metric: took 3.274456ms to wait for pod list to return data ...
	I0111 07:49:05.452900  180680 default_sa.go:34] waiting for default service account to be created ...
	I0111 07:49:05.454785  180680 default_sa.go:45] found service account: "default"
	I0111 07:49:05.454833  180680 default_sa.go:55] duration metric: took 1.924957ms for default service account to be created ...
	I0111 07:49:05.454843  180680 system_pods.go:116] waiting for k8s-apps to be running ...
	I0111 07:49:05.457375  180680 system_pods.go:86] 7 kube-system pods found
	I0111 07:49:05.457407  180680 system_pods.go:89] "coredns-7d764666f9-p8c2j" [aa1e3126-8605-4b65-8bd5-601f8a9e4971] Running
	I0111 07:49:05.457414  180680 system_pods.go:89] "etcd-pause-979859" [fc68f4d1-8c13-4bc8-9b74-65ff40bb8de1] Running
	I0111 07:49:05.457417  180680 system_pods.go:89] "kindnet-8zgsx" [92894192-3035-436e-bad2-126e9c157984] Running
	I0111 07:49:05.457421  180680 system_pods.go:89] "kube-apiserver-pause-979859" [209a55cc-a23c-4bb8-bb8f-48375d938576] Running
	I0111 07:49:05.457425  180680 system_pods.go:89] "kube-controller-manager-pause-979859" [5120c914-5c48-4c18-a6b3-812d88e8ea84] Running
	I0111 07:49:05.457429  180680 system_pods.go:89] "kube-proxy-99tf8" [92494e6a-e314-4b9c-93d4-5a15fe8e759f] Running
	I0111 07:49:05.457432  180680 system_pods.go:89] "kube-scheduler-pause-979859" [2970e557-ea1b-4466-a213-954e07b90bec] Running
	I0111 07:49:05.457438  180680 system_pods.go:126] duration metric: took 2.589106ms to wait for k8s-apps to be running ...
	I0111 07:49:05.457447  180680 system_svc.go:44] waiting for kubelet service to be running ....
	I0111 07:49:05.457486  180680 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0111 07:49:05.471475  180680 system_svc.go:56] duration metric: took 14.019648ms WaitForService to wait for kubelet
	I0111 07:49:05.471502  180680 kubeadm.go:587] duration metric: took 179.040488ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0111 07:49:05.471523  180680 node_conditions.go:102] verifying NodePressure condition ...
	I0111 07:49:05.474478  180680 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0111 07:49:05.474509  180680 node_conditions.go:123] node cpu capacity is 8
	I0111 07:49:05.474527  180680 node_conditions.go:105] duration metric: took 2.99888ms to run NodePressure ...
	I0111 07:49:05.474545  180680 start.go:242] waiting for startup goroutines ...
	I0111 07:49:05.474556  180680 start.go:247] waiting for cluster config update ...
	I0111 07:49:05.474568  180680 start.go:256] writing updated cluster config ...
	I0111 07:49:05.474920  180680 ssh_runner.go:195] Run: rm -f paused
	I0111 07:49:05.479259  180680 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0111 07:49:05.480081  180680 kapi.go:59] client config for pause-979859: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22402-4689/.minikube/profiles/pause-979859/client.crt", KeyFile:"/home/jenkins/minikube-integration/22402-4689/.minikube/profiles/pause-979859/client.key", CAFile:"/home/jenkins/minikube-integration/22402-4689/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f75c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0111 07:49:05.483064  180680 pod_ready.go:83] waiting for pod "coredns-7d764666f9-p8c2j" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 07:49:05.486928  180680 pod_ready.go:94] pod "coredns-7d764666f9-p8c2j" is "Ready"
	I0111 07:49:05.486947  180680 pod_ready.go:86] duration metric: took 3.862051ms for pod "coredns-7d764666f9-p8c2j" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 07:49:05.488880  180680 pod_ready.go:83] waiting for pod "etcd-pause-979859" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 07:49:05.492616  180680 pod_ready.go:94] pod "etcd-pause-979859" is "Ready"
	I0111 07:49:05.492635  180680 pod_ready.go:86] duration metric: took 3.730305ms for pod "etcd-pause-979859" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 07:49:05.494363  180680 pod_ready.go:83] waiting for pod "kube-apiserver-pause-979859" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 07:49:05.498226  180680 pod_ready.go:94] pod "kube-apiserver-pause-979859" is "Ready"
	I0111 07:49:05.498249  180680 pod_ready.go:86] duration metric: took 3.862098ms for pod "kube-apiserver-pause-979859" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 07:49:05.500104  180680 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-979859" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 07:49:05.883657  180680 pod_ready.go:94] pod "kube-controller-manager-pause-979859" is "Ready"
	I0111 07:49:05.883690  180680 pod_ready.go:86] duration metric: took 383.562542ms for pod "kube-controller-manager-pause-979859" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 07:49:06.083735  180680 pod_ready.go:83] waiting for pod "kube-proxy-99tf8" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 07:49:06.484037  180680 pod_ready.go:94] pod "kube-proxy-99tf8" is "Ready"
	I0111 07:49:06.484068  180680 pod_ready.go:86] duration metric: took 400.301845ms for pod "kube-proxy-99tf8" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 07:49:06.684196  180680 pod_ready.go:83] waiting for pod "kube-scheduler-pause-979859" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 07:49:07.083551  180680 pod_ready.go:94] pod "kube-scheduler-pause-979859" is "Ready"
	I0111 07:49:07.083582  180680 pod_ready.go:86] duration metric: took 399.355447ms for pod "kube-scheduler-pause-979859" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 07:49:07.083598  180680 pod_ready.go:40] duration metric: took 1.604309709s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0111 07:49:07.128819  180680 start.go:625] kubectl: 1.35.0, cluster: 1.35.0 (minor skew: 0)
	I0111 07:49:07.166003  180680 out.go:179] * Done! kubectl is now configured to use "pause-979859" cluster and "default" namespace by default
	I0111 07:49:03.831362  181612 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0111 07:49:03.831568  181612 start.go:159] libmachine.API.Create for "kubernetes-upgrade-416144" (driver="docker")
	I0111 07:49:03.831593  181612 client.go:173] LocalClient.Create starting
	I0111 07:49:03.831650  181612 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22402-4689/.minikube/certs/ca.pem
	I0111 07:49:03.831692  181612 main.go:144] libmachine: Decoding PEM data...
	I0111 07:49:03.831716  181612 main.go:144] libmachine: Parsing certificate...
	I0111 07:49:03.831767  181612 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22402-4689/.minikube/certs/cert.pem
	I0111 07:49:03.831790  181612 main.go:144] libmachine: Decoding PEM data...
	I0111 07:49:03.831852  181612 main.go:144] libmachine: Parsing certificate...
	I0111 07:49:03.832179  181612 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-416144 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0111 07:49:03.850010  181612 cli_runner.go:211] docker network inspect kubernetes-upgrade-416144 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0111 07:49:03.850079  181612 network_create.go:284] running [docker network inspect kubernetes-upgrade-416144] to gather additional debugging logs...
	I0111 07:49:03.850101  181612 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-416144
	W0111 07:49:03.868349  181612 cli_runner.go:211] docker network inspect kubernetes-upgrade-416144 returned with exit code 1
	I0111 07:49:03.868376  181612 network_create.go:287] error running [docker network inspect kubernetes-upgrade-416144]: docker network inspect kubernetes-upgrade-416144: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network kubernetes-upgrade-416144 not found
	I0111 07:49:03.868390  181612 network_create.go:289] output of [docker network inspect kubernetes-upgrade-416144]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network kubernetes-upgrade-416144 not found
	
	** /stderr **
	I0111 07:49:03.868501  181612 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0111 07:49:03.886928  181612 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-97182e010def IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:7c:25:92:4b:a6} reservation:<nil>}
	I0111 07:49:03.887434  181612 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-2c82d96e68c9 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:72:e5:3c:51:ff:2e} reservation:<nil>}
	I0111 07:49:03.887933  181612 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-a487bdecf87a IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:76:a6:1b:9d:11:3d} reservation:<nil>}
	I0111 07:49:03.888436  181612 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-aecd8d6df546 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:ea:b2:1b:fc:20:f2} reservation:<nil>}
	I0111 07:49:03.889181  181612 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f184d0}
	I0111 07:49:03.889212  181612 network_create.go:124] attempt to create docker network kubernetes-upgrade-416144 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I0111 07:49:03.889260  181612 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-416144 kubernetes-upgrade-416144
	I0111 07:49:03.941031  181612 network_create.go:108] docker network kubernetes-upgrade-416144 192.168.85.0/24 created
	I0111 07:49:03.941067  181612 kic.go:121] calculated static IP "192.168.85.2" for the "kubernetes-upgrade-416144" container
	I0111 07:49:03.941142  181612 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0111 07:49:03.961707  181612 cli_runner.go:164] Run: docker volume create kubernetes-upgrade-416144 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-416144 --label created_by.minikube.sigs.k8s.io=true
	I0111 07:49:03.981716  181612 oci.go:103] Successfully created a docker volume kubernetes-upgrade-416144
	I0111 07:49:03.981811  181612 cli_runner.go:164] Run: docker run --rm --name kubernetes-upgrade-416144-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-416144 --entrypoint /usr/bin/test -v kubernetes-upgrade-416144:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 -d /var/lib
	I0111 07:49:04.414513  181612 oci.go:107] Successfully prepared a docker volume kubernetes-upgrade-416144
	I0111 07:49:04.414591  181612 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I0111 07:49:04.414604  181612 kic.go:194] Starting extracting preloaded images to volume ...
	I0111 07:49:04.415050  181612 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22402-4689/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v kubernetes-upgrade-416144:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 -I lz4 -xf /preloaded.tar -C /extractDir
	I0111 07:49:04.545976  176771 ssh_runner.go:235] Completed: sudo /usr/bin/crictl stop --timeout=10 914d4aed25cfd45075612ec9d7ddf868d011974f7b31e21b7c1cdc35d471cea8 f3c9ba4f3e5cd8428df44dbe59d03852f25f3837b76d9203e6ecb23e7a6f2c3e 3131a5f033877e6f8988925bfe183633dc80a2dab8eae1183d75736367901136 3c7413176aaff01341bb76bb3ad734f63709042b8053fad4b72da3195eee13bc bdf6fe4113ab43d130044ff170b2c826b642fa3e776794c980ed4b74ff71fd47 31ab284fa880a55b0c56d9021f3e815e22329e6bb89ba430c5d158b67e01dfbc 97f478d699abfe3535adb73be64057a384396a9b1e90a1dbe1f564a57c953751: (11.026668496s)
	I0111 07:49:04.546055  176771 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0111 07:49:04.594222  176771 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0111 07:49:04.607476  176771 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5651 Jan 11 07:48 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Jan 11 07:48 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 Jan 11 07:48 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Jan 11 07:48 /etc/kubernetes/scheduler.conf
	
	I0111 07:49:04.607540  176771 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0111 07:49:04.620642  176771 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0111 07:49:04.632603  176771 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0111 07:49:04.644614  176771 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0111 07:49:04.644678  176771 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0111 07:49:04.654333  176771 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0111 07:49:04.665839  176771 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0111 07:49:04.665902  176771 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0111 07:49:04.676054  176771 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0111 07:49:04.687947  176771 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0111 07:49:04.735502  176771 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0111 07:49:06.060991  176771 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.32544916s)
	I0111 07:49:06.061063  176771 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0111 07:49:06.279746  176771 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0111 07:49:06.334525  176771 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0111 07:49:06.396058  176771 api_server.go:52] waiting for apiserver process to appear ...
	I0111 07:49:06.396128  176771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0111 07:49:06.896995  176771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0111 07:49:07.397029  176771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0111 07:49:07.412942  176771 api_server.go:72] duration metric: took 1.01689379s to wait for apiserver process to appear ...
	I0111 07:49:07.412966  176771 api_server.go:88] waiting for apiserver healthz status ...
	I0111 07:49:07.412996  176771 api_server.go:299] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0111 07:49:08.370636  176771 api_server.go:325] https://192.168.94.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0111 07:49:08.370665  176771 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0111 07:49:08.370680  176771 api_server.go:299] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0111 07:49:08.398856  176771 api_server.go:325] https://192.168.94.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0111 07:49:08.398909  176771 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0111 07:49:08.413048  176771 api_server.go:299] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0111 07:49:08.718873  176771 api_server.go:325] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0111 07:49:08.718915  176771 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0111 07:49:08.913305  176771 api_server.go:299] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0111 07:49:08.917290  176771 api_server.go:325] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0111 07:49:08.917322  176771 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0111 07:49:09.413950  176771 api_server.go:299] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0111 07:49:09.551351  176771 api_server.go:325] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0111 07:49:09.551382  176771 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0111 07:49:09.913895  176771 api_server.go:299] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0111 07:49:09.918558  176771 api_server.go:325] https://192.168.94.2:8443/healthz returned 200:
	ok
	I0111 07:49:09.924700  176771 api_server.go:141] control plane version: v1.32.0
	I0111 07:49:09.924731  176771 api_server.go:131] duration metric: took 2.511757337s to wait for apiserver health ...
	I0111 07:49:09.924743  176771 cni.go:84] Creating CNI manager for ""
	I0111 07:49:09.924751  176771 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0111 07:49:09.926784  176771 out.go:179] * Configuring CNI (Container Networking Interface) ...
	
	
	==> CRI-O <==
	Jan 11 07:49:03 pause-979859 crio[2198]: time="2026-01-11T07:49:03.947071242Z" level=info msg="RDT not available in the host system"
	Jan 11 07:49:03 pause-979859 crio[2198]: time="2026-01-11T07:49:03.947086484Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Jan 11 07:49:03 pause-979859 crio[2198]: time="2026-01-11T07:49:03.947904758Z" level=info msg="Conmon does support the --sync option"
	Jan 11 07:49:03 pause-979859 crio[2198]: time="2026-01-11T07:49:03.947922177Z" level=info msg="Conmon does support the --log-global-size-max option"
	Jan 11 07:49:03 pause-979859 crio[2198]: time="2026-01-11T07:49:03.947934587Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Jan 11 07:49:03 pause-979859 crio[2198]: time="2026-01-11T07:49:03.948742288Z" level=info msg="Conmon does support the --sync option"
	Jan 11 07:49:03 pause-979859 crio[2198]: time="2026-01-11T07:49:03.948757208Z" level=info msg="Conmon does support the --log-global-size-max option"
	Jan 11 07:49:03 pause-979859 crio[2198]: time="2026-01-11T07:49:03.953878688Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jan 11 07:49:03 pause-979859 crio[2198]: time="2026-01-11T07:49:03.953898913Z" level=info msg="Updated default CNI network name to kindnet"
	Jan 11 07:49:03 pause-979859 crio[2198]: time="2026-01-11T07:49:03.954498348Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oci/hoo
ks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"systemd\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n    uid_
mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory
= \"12MiB\"\n        no_sync_log = false\n        container_create_timeout = 240\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n        container_create_timeout = 240\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    namespaced_auth_dir = \"/etc/crio/auth\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"en
forcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_dir = \"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [cri
o.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [crio.nri]\n    enable_nri = true\n    nri_listen = \"/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Jan 11 07:49:03 pause-979859 crio[2198]: time="2026-01-11T07:49:03.954881137Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Jan 11 07:49:03 pause-979859 crio[2198]: time="2026-01-11T07:49:03.954940217Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Jan 11 07:49:04 pause-979859 crio[2198]: time="2026-01-11T07:49:04.033901204Z" level=info msg="Got pod network &{Name:coredns-7d764666f9-p8c2j Namespace:kube-system ID:aa6d3b81020f919ed00e576c65ccc2028eae17b1de235a3acbf5b984f13481ec UID:aa1e3126-8605-4b65-8bd5-601f8a9e4971 NetNS:/var/run/netns/d235702f-1f38-4abc-bd99-a56ecc8aa9d7 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00043c268}] Aliases:map[]}"
	Jan 11 07:49:04 pause-979859 crio[2198]: time="2026-01-11T07:49:04.034145319Z" level=info msg="Checking pod kube-system_coredns-7d764666f9-p8c2j for CNI network kindnet (type=ptp)"
	Jan 11 07:49:04 pause-979859 crio[2198]: time="2026-01-11T07:49:04.034580158Z" level=info msg="Registered SIGHUP reload watcher"
	Jan 11 07:49:04 pause-979859 crio[2198]: time="2026-01-11T07:49:04.034605064Z" level=info msg="Starting seccomp notifier watcher"
	Jan 11 07:49:04 pause-979859 crio[2198]: time="2026-01-11T07:49:04.034665145Z" level=info msg="Create NRI interface"
	Jan 11 07:49:04 pause-979859 crio[2198]: time="2026-01-11T07:49:04.034786125Z" level=info msg="built-in NRI default validator is disabled"
	Jan 11 07:49:04 pause-979859 crio[2198]: time="2026-01-11T07:49:04.034811473Z" level=info msg="runtime interface created"
	Jan 11 07:49:04 pause-979859 crio[2198]: time="2026-01-11T07:49:04.034829582Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Jan 11 07:49:04 pause-979859 crio[2198]: time="2026-01-11T07:49:04.034844545Z" level=info msg="runtime interface starting up..."
	Jan 11 07:49:04 pause-979859 crio[2198]: time="2026-01-11T07:49:04.034851498Z" level=info msg="starting plugins..."
	Jan 11 07:49:04 pause-979859 crio[2198]: time="2026-01-11T07:49:04.034866641Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Jan 11 07:49:04 pause-979859 crio[2198]: time="2026-01-11T07:49:04.035253688Z" level=info msg="No systemd watchdog enabled"
	Jan 11 07:49:04 pause-979859 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	fb3d48a5461e3       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                     12 seconds ago      Running             coredns                   0                   aa6d3b81020f9       coredns-7d764666f9-p8c2j               kube-system
	03c0e211978ec       docker.io/kindest/kindnetd@sha256:7c22558dc06a570d46ea6e8a73b23cdc754eb81f7c08d3441a3171ad359ffc27   23 seconds ago      Running             kindnet-cni               0                   6d4f5c02f660a       kindnet-8zgsx                          kube-system
	39df3868e30b9       32652ff1bbe6b6df16a3dc621fcad4e6e185a2852bffc5847aab79e36c54bfb8                                     25 seconds ago      Running             kube-proxy                0                   90ad97c6222ef       kube-proxy-99tf8                       kube-system
	4cbe4980b4381       550794e3b12ac21ec7fd940bdfb45f7c8dae3c52acd8c70e580b882de20c3dcc                                     35 seconds ago      Running             kube-scheduler            0                   965d6007f2903       kube-scheduler-pause-979859            kube-system
	87587bcd281f1       5c6acd67e9cd10eb60a246cd233db251ef62ea97e6572f897e873f0cb648f499                                     35 seconds ago      Running             kube-apiserver            0                   7f1b33272110f       kube-apiserver-pause-979859            kube-system
	697756342f1b7       0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2                                     35 seconds ago      Running             etcd                      0                   ad023adbc68b5       etcd-pause-979859                      kube-system
	8cefc332f1bb9       2c9a4b058bd7e6ec479c38f9e8a1dac2f5ee5b0a3ebda6dfac968e8720229508                                     35 seconds ago      Running             kube-controller-manager   0                   2572503136cee       kube-controller-manager-pause-979859   kube-system
	
	
	==> coredns [fb3d48a5461e3d903bd4f6a53010691c4575384f0fe1f9858577740063664cfc] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:57441 - 17508 "HINFO IN 3055222049047057707.2413179664257208072. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.420065937s
	
	
	==> describe nodes <==
	Name:               pause-979859
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-979859
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4473aa4ffaa416872fe849e19c0ce3dabca02c04
	                    minikube.k8s.io/name=pause-979859
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2026_01_11T07_48_40_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 11 Jan 2026 07:48:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-979859
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 11 Jan 2026 07:49:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 11 Jan 2026 07:48:57 +0000   Sun, 11 Jan 2026 07:48:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 11 Jan 2026 07:48:57 +0000   Sun, 11 Jan 2026 07:48:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 11 Jan 2026 07:48:57 +0000   Sun, 11 Jan 2026 07:48:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 11 Jan 2026 07:48:57 +0000   Sun, 11 Jan 2026 07:48:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    pause-979859
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 19a608e996fda0d6a8d0aefd69620b69
	  System UUID:                1c09a476-6e50-457b-bc42-6b9d3757c9af
	  Boot ID:                    bf5ef4fd-1e5e-4242-b522-48b308ed11f1
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.35.0
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7d764666f9-p8c2j                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     25s
	  kube-system                 etcd-pause-979859                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         31s
	  kube-system                 kindnet-8zgsx                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      26s
	  kube-system                 kube-apiserver-pause-979859             250m (3%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-controller-manager-pause-979859    200m (2%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-proxy-99tf8                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	  kube-system                 kube-scheduler-pause-979859             100m (1%)     0 (0%)      0 (0%)           0 (0%)         31s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  27s   node-controller  Node pause-979859 event: Registered Node pause-979859 in Controller
	
	
	==> dmesg <==
	[Jan11 07:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001873] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.085009] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.377362] i8042: Warning: Keylock active
	[  +0.013119] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.487024] block sda: the capability attribute has been deprecated.
	[  +0.081163] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.022550] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +4.654171] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [697756342f1b7fa4f1be176804f26aeac8f5640dc6a8e9956e55c932c1ec1e0c] <==
	{"level":"info","ts":"2026-01-11T07:48:35.871069Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2026-01-11T07:48:35.871087Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"ea7e25599daad906 became candidate at term 2"}
	{"level":"info","ts":"2026-01-11T07:48:35.871784Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2026-01-11T07:48:35.871822Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2026-01-11T07:48:35.871841Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"ea7e25599daad906 became leader at term 2"}
	{"level":"info","ts":"2026-01-11T07:48:35.871850Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2026-01-11T07:48:35.872473Z","caller":"etcdserver/server.go:2420","msg":"setting up initial cluster version using v3 API","cluster-version":"3.6"}
	{"level":"info","ts":"2026-01-11T07:48:35.872988Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-11T07:48:35.872983Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:pause-979859 ClientURLs:[https://192.168.76.2:2379]}","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2026-01-11T07:48:35.873028Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-11T07:48:35.873224Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2026-01-11T07:48:35.873244Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2026-01-11T07:48:35.873415Z","caller":"membership/cluster.go:682","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.6"}
	{"level":"info","ts":"2026-01-11T07:48:35.873511Z","caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.6"}
	{"level":"info","ts":"2026-01-11T07:48:35.873538Z","caller":"etcdserver/server.go:2440","msg":"cluster version is updated","cluster-version":"3.6"}
	{"level":"info","ts":"2026-01-11T07:48:35.873572Z","caller":"version/monitor.go:116","msg":"cluster version differs from storage version.","cluster-version":"3.6.0","storage-version":"3.5.0"}
	{"level":"info","ts":"2026-01-11T07:48:35.873656Z","caller":"schema/migration.go:65","msg":"updated storage version","new-storage-version":"3.6.0"}
	{"level":"info","ts":"2026-01-11T07:48:35.874301Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-11T07:48:35.874295Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-11T07:48:35.878377Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2026-01-11T07:48:35.878592Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2026-01-11T07:48:49.748249Z","caller":"traceutil/trace.go:172","msg":"trace[1693887841] transaction","detail":"{read_only:false; response_revision:427; number_of_response:1; }","duration":"127.692339ms","start":"2026-01-11T07:48:49.620531Z","end":"2026-01-11T07:48:49.748223Z","steps":["trace[1693887841] 'process raft request'  (duration: 102.968724ms)","trace[1693887841] 'compare'  (duration: 24.605351ms)"],"step_count":2}
	{"level":"error","ts":"2026-01-11T07:49:07.801988Z","caller":"etcdhttp/health.go:345","msg":"Health check error","path":"/readyz","reason":"[+]data_corruption ok\n[+]serializable_read ok\n[-]linearizable_read failed: context canceled\n[+]non_learner ok\n","status-code":503,"stacktrace":"go.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp.(*CheckRegistry).installRootHTTPEndpoint.newHealthHandler.func2\n\tgo.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp/health.go:345\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2294\nnet/http.(*ServeMux).ServeHTTP\n\tnet/http/server.go:2822\nnet/http.serverHandler.ServeHTTP\n\tnet/http/server.go:3301\nnet/http.(*conn).serve\n\tnet/http/server.go:2102"}
	{"level":"info","ts":"2026-01-11T07:49:07.918708Z","caller":"traceutil/trace.go:172","msg":"trace[907106383] linearizableReadLoop","detail":"{readStateIndex:474; appliedIndex:474; }","duration":"148.441975ms","start":"2026-01-11T07:49:07.770242Z","end":"2026-01-11T07:49:07.918684Z","steps":["trace[907106383] 'read index received'  (duration: 148.434172ms)","trace[907106383] 'applied index is now lower than readState.Index'  (duration: 6.746µs)"],"step_count":2}
	{"level":"info","ts":"2026-01-11T07:49:07.918871Z","caller":"traceutil/trace.go:172","msg":"trace[1713011793] transaction","detail":"{read_only:false; response_revision:457; number_of_response:1; }","duration":"181.193079ms","start":"2026-01-11T07:49:07.737663Z","end":"2026-01-11T07:49:07.918856Z","steps":["trace[1713011793] 'process raft request'  (duration: 181.049784ms)"],"step_count":1}
	
	
	==> kernel <==
	 07:49:10 up 31 min,  0 user,  load average: 7.96, 3.19, 1.84
	Linux pause-979859 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [03c0e211978ecf5f182769c51bc9eacba756e284410fadbd73e155decf990fec] <==
	I0111 07:48:47.412832       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0111 07:48:47.413281       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I0111 07:48:47.413434       1 main.go:148] setting mtu 1500 for CNI 
	I0111 07:48:47.413455       1 main.go:178] kindnetd IP family: "ipv4"
	I0111 07:48:47.413466       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2026-01-11T07:48:47Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0111 07:48:47.613548       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0111 07:48:47.613637       1 controller.go:381] "Waiting for informer caches to sync"
	I0111 07:48:47.613653       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0111 07:48:47.613781       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I0111 07:48:47.913714       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0111 07:48:47.913739       1 metrics.go:72] Registering metrics
	I0111 07:48:47.913855       1 controller.go:711] "Syncing nftables rules"
	I0111 07:48:57.615922       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0111 07:48:57.616012       1 main.go:301] handling current node
	I0111 07:49:07.615054       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0111 07:49:07.615099       1 main.go:301] handling current node
	
	
	==> kube-apiserver [87587bcd281f1c0a2aef860698f2ac71a9f66c8ad72bd9d536d718035eb38bbc] <==
	I0111 07:48:37.040160       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I0111 07:48:37.040211       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0111 07:48:37.040234       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0111 07:48:37.040546       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I0111 07:48:37.047755       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I0111 07:48:37.048638       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0111 07:48:37.049396       1 default_servicecidr_controller.go:231] Setting default ServiceCIDR condition Ready to True
	I0111 07:48:37.059972       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0111 07:48:37.938231       1 storage_scheduling.go:123] created PriorityClass system-node-critical with value 2000001000
	I0111 07:48:37.941682       1 storage_scheduling.go:123] created PriorityClass system-cluster-critical with value 2000000000
	I0111 07:48:37.941698       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I0111 07:48:38.386372       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0111 07:48:38.424257       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0111 07:48:38.552114       1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0111 07:48:38.559110       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I0111 07:48:38.560705       1 controller.go:667] quota admission added evaluator for: endpoints
	I0111 07:48:38.566514       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0111 07:48:38.985009       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I0111 07:48:39.639758       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I0111 07:48:39.651148       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0111 07:48:39.657619       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I0111 07:48:44.641665       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0111 07:48:44.795626       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0111 07:48:44.809584       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0111 07:48:44.989540       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [8cefc332f1bb940a084c44f74130ed4c2b329c7fb9503f73c7ec6a9e051e9362] <==
	I0111 07:48:43.798269       1 shared_informer.go:377] "Caches are synced"
	I0111 07:48:43.797173       1 shared_informer.go:377] "Caches are synced"
	I0111 07:48:43.798284       1 shared_informer.go:377] "Caches are synced"
	I0111 07:48:43.798260       1 shared_informer.go:377] "Caches are synced"
	I0111 07:48:43.797148       1 shared_informer.go:377] "Caches are synced"
	I0111 07:48:43.797336       1 shared_informer.go:377] "Caches are synced"
	I0111 07:48:43.798000       1 shared_informer.go:377] "Caches are synced"
	I0111 07:48:43.797227       1 shared_informer.go:377] "Caches are synced"
	I0111 07:48:43.797352       1 shared_informer.go:377] "Caches are synced"
	I0111 07:48:43.798496       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I0111 07:48:43.797226       1 shared_informer.go:377] "Caches are synced"
	I0111 07:48:43.800937       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="pause-979859"
	I0111 07:48:43.801026       1 node_lifecycle_controller.go:1038] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I0111 07:48:43.801076       1 shared_informer.go:377] "Caches are synced"
	I0111 07:48:43.801705       1 shared_informer.go:377] "Caches are synced"
	I0111 07:48:43.802339       1 shared_informer.go:377] "Caches are synced"
	I0111 07:48:43.797197       1 shared_informer.go:377] "Caches are synced"
	I0111 07:48:43.807793       1 shared_informer.go:370] "Waiting for caches to sync"
	I0111 07:48:43.810851       1 shared_informer.go:377] "Caches are synced"
	I0111 07:48:43.813540       1 range_allocator.go:433] "Set node PodCIDR" node="pause-979859" podCIDRs=["10.244.0.0/24"]
	I0111 07:48:43.897161       1 shared_informer.go:377] "Caches are synced"
	I0111 07:48:43.897235       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I0111 07:48:43.897245       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I0111 07:48:43.908649       1 shared_informer.go:377] "Caches are synced"
	I0111 07:48:58.804039       1 node_lifecycle_controller.go:1057] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-proxy [39df3868e30b956ed7b402910c52eb3d3f645eff78410c5ced0e05e07cfb2605] <==
	I0111 07:48:45.457104       1 server_linux.go:53] "Using iptables proxy"
	I0111 07:48:45.523348       1 shared_informer.go:370] "Waiting for caches to sync"
	I0111 07:48:45.624087       1 shared_informer.go:377] "Caches are synced"
	I0111 07:48:45.624139       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E0111 07:48:45.624230       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0111 07:48:45.664288       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0111 07:48:45.664357       1 server_linux.go:136] "Using iptables Proxier"
	I0111 07:48:45.673066       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0111 07:48:45.675052       1 server.go:529] "Version info" version="v1.35.0"
	I0111 07:48:45.675084       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0111 07:48:45.677400       1 config.go:200] "Starting service config controller"
	I0111 07:48:45.677473       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0111 07:48:45.677518       1 config.go:106] "Starting endpoint slice config controller"
	I0111 07:48:45.677561       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0111 07:48:45.677593       1 config.go:403] "Starting serviceCIDR config controller"
	I0111 07:48:45.677616       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0111 07:48:45.677684       1 config.go:309] "Starting node config controller"
	I0111 07:48:45.677709       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0111 07:48:45.677750       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0111 07:48:45.778630       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0111 07:48:45.778749       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0111 07:48:45.778766       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [4cbe4980b4381d3cedbd6d8e5865265d8171d37f8ceeeece80e97c768b8baca8] <==
	E0111 07:48:37.011945       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E0111 07:48:37.015591       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E0111 07:48:37.015734       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E0111 07:48:37.019868       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E0111 07:48:37.020009       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E0111 07:48:37.020083       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E0111 07:48:37.020167       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E0111 07:48:37.033019       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E0111 07:48:37.033150       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E0111 07:48:37.033228       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E0111 07:48:37.033266       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E0111 07:48:37.033314       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E0111 07:48:37.033373       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E0111 07:48:37.033420       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E0111 07:48:37.033477       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E0111 07:48:37.033571       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E0111 07:48:37.033634       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E0111 07:48:37.036866       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E0111 07:48:37.869718       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1693" type="*v1.ConfigMap"
	E0111 07:48:38.015039       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E0111 07:48:38.081775       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E0111 07:48:38.117285       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E0111 07:48:38.155456       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E0111 07:48:38.182559       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	I0111 07:48:40.706210       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Jan 11 07:48:45 pause-979859 kubelet[1295]: I0111 07:48:45.032520    1295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/92894192-3035-436e-bad2-126e9c157984-cni-cfg\") pod \"kindnet-8zgsx\" (UID: \"92894192-3035-436e-bad2-126e9c157984\") " pod="kube-system/kindnet-8zgsx"
	Jan 11 07:48:45 pause-979859 kubelet[1295]: I0111 07:48:45.032563    1295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/92494e6a-e314-4b9c-93d4-5a15fe8e759f-lib-modules\") pod \"kube-proxy-99tf8\" (UID: \"92494e6a-e314-4b9c-93d4-5a15fe8e759f\") " pod="kube-system/kube-proxy-99tf8"
	Jan 11 07:48:45 pause-979859 kubelet[1295]: I0111 07:48:45.032604    1295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d77cc\" (UniqueName: \"kubernetes.io/projected/92894192-3035-436e-bad2-126e9c157984-kube-api-access-d77cc\") pod \"kindnet-8zgsx\" (UID: \"92894192-3035-436e-bad2-126e9c157984\") " pod="kube-system/kindnet-8zgsx"
	Jan 11 07:48:45 pause-979859 kubelet[1295]: I0111 07:48:45.032633    1295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/92494e6a-e314-4b9c-93d4-5a15fe8e759f-kube-proxy\") pod \"kube-proxy-99tf8\" (UID: \"92494e6a-e314-4b9c-93d4-5a15fe8e759f\") " pod="kube-system/kube-proxy-99tf8"
	Jan 11 07:48:45 pause-979859 kubelet[1295]: I0111 07:48:45.032658    1295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbkcm\" (UniqueName: \"kubernetes.io/projected/92494e6a-e314-4b9c-93d4-5a15fe8e759f-kube-api-access-xbkcm\") pod \"kube-proxy-99tf8\" (UID: \"92494e6a-e314-4b9c-93d4-5a15fe8e759f\") " pod="kube-system/kube-proxy-99tf8"
	Jan 11 07:48:45 pause-979859 kubelet[1295]: I0111 07:48:45.576338    1295 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-99tf8" podStartSLOduration=0.576316915 podStartE2EDuration="576.316915ms" podCreationTimestamp="2026-01-11 07:48:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-11 07:48:45.57616911 +0000 UTC m=+6.161597572" watchObservedRunningTime="2026-01-11 07:48:45.576316915 +0000 UTC m=+6.161745377"
	Jan 11 07:48:47 pause-979859 kubelet[1295]: I0111 07:48:47.581610    1295 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kindnet-8zgsx" podStartSLOduration=1.8986988089999999 podStartE2EDuration="3.581592626s" podCreationTimestamp="2026-01-11 07:48:44 +0000 UTC" firstStartedPulling="2026-01-11 07:48:45.33435978 +0000 UTC m=+5.919788242" lastFinishedPulling="2026-01-11 07:48:47.017253603 +0000 UTC m=+7.602682059" observedRunningTime="2026-01-11 07:48:47.581472025 +0000 UTC m=+8.166900487" watchObservedRunningTime="2026-01-11 07:48:47.581592626 +0000 UTC m=+8.167021100"
	Jan 11 07:48:48 pause-979859 kubelet[1295]: E0111 07:48:48.422271    1295 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-pause-979859" containerName="kube-apiserver"
	Jan 11 07:48:48 pause-979859 kubelet[1295]: E0111 07:48:48.976504    1295 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-pause-979859" containerName="kube-controller-manager"
	Jan 11 07:48:49 pause-979859 kubelet[1295]: E0111 07:48:49.604746    1295 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-pause-979859" containerName="etcd"
	Jan 11 07:48:53 pause-979859 kubelet[1295]: E0111 07:48:53.200453    1295 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-pause-979859" containerName="kube-scheduler"
	Jan 11 07:48:57 pause-979859 kubelet[1295]: I0111 07:48:57.790513    1295 kubelet_node_status.go:427] "Fast updating node status as it just became ready"
	Jan 11 07:48:57 pause-979859 kubelet[1295]: I0111 07:48:57.920990    1295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aa1e3126-8605-4b65-8bd5-601f8a9e4971-config-volume\") pod \"coredns-7d764666f9-p8c2j\" (UID: \"aa1e3126-8605-4b65-8bd5-601f8a9e4971\") " pod="kube-system/coredns-7d764666f9-p8c2j"
	Jan 11 07:48:57 pause-979859 kubelet[1295]: I0111 07:48:57.921038    1295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47ljz\" (UniqueName: \"kubernetes.io/projected/aa1e3126-8605-4b65-8bd5-601f8a9e4971-kube-api-access-47ljz\") pod \"coredns-7d764666f9-p8c2j\" (UID: \"aa1e3126-8605-4b65-8bd5-601f8a9e4971\") " pod="kube-system/coredns-7d764666f9-p8c2j"
	Jan 11 07:48:58 pause-979859 kubelet[1295]: E0111 07:48:58.428402    1295 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-pause-979859" containerName="kube-apiserver"
	Jan 11 07:48:58 pause-979859 kubelet[1295]: E0111 07:48:58.594854    1295 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-p8c2j" containerName="coredns"
	Jan 11 07:48:58 pause-979859 kubelet[1295]: I0111 07:48:58.623769    1295 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-p8c2j" podStartSLOduration=13.623747228 podStartE2EDuration="13.623747228s" podCreationTimestamp="2026-01-11 07:48:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-11 07:48:58.610263587 +0000 UTC m=+19.195692049" watchObservedRunningTime="2026-01-11 07:48:58.623747228 +0000 UTC m=+19.209175690"
	Jan 11 07:48:58 pause-979859 kubelet[1295]: E0111 07:48:58.982744    1295 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-pause-979859" containerName="kube-controller-manager"
	Jan 11 07:48:59 pause-979859 kubelet[1295]: E0111 07:48:59.596447    1295 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-p8c2j" containerName="coredns"
	Jan 11 07:48:59 pause-979859 kubelet[1295]: E0111 07:48:59.606312    1295 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-pause-979859" containerName="etcd"
	Jan 11 07:49:00 pause-979859 kubelet[1295]: E0111 07:49:00.598316    1295 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-p8c2j" containerName="coredns"
	Jan 11 07:49:07 pause-979859 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Jan 11 07:49:07 pause-979859 systemd[1]: kubelet.service: Deactivated successfully.
	Jan 11 07:49:07 pause-979859 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Jan 11 07:49:07 pause-979859 systemd[1]: kubelet.service: Consumed 1.277s CPU time.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-979859 -n pause-979859
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-979859 -n pause-979859: exit status 2 (374.267331ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context pause-979859 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect pause-979859
helpers_test.go:244: (dbg) docker inspect pause-979859:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "2e11570b3a5f5d1a8285ef841adad93e868a808e5d0cb41dced9634227843232",
	        "Created": "2026-01-11T07:48:19.862374935Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 167197,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2026-01-11T07:48:22.076887163Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:9d81ad6f098dcf669510a21e6f98da4d3301079c139c22f3d7bcee050830f45e",
	        "ResolvConfPath": "/var/lib/docker/containers/2e11570b3a5f5d1a8285ef841adad93e868a808e5d0cb41dced9634227843232/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2e11570b3a5f5d1a8285ef841adad93e868a808e5d0cb41dced9634227843232/hostname",
	        "HostsPath": "/var/lib/docker/containers/2e11570b3a5f5d1a8285ef841adad93e868a808e5d0cb41dced9634227843232/hosts",
	        "LogPath": "/var/lib/docker/containers/2e11570b3a5f5d1a8285ef841adad93e868a808e5d0cb41dced9634227843232/2e11570b3a5f5d1a8285ef841adad93e868a808e5d0cb41dced9634227843232-json.log",
	        "Name": "/pause-979859",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-979859:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "pause-979859",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "2e11570b3a5f5d1a8285ef841adad93e868a808e5d0cb41dced9634227843232",
	                "LowerDir": "/var/lib/docker/overlay2/9e1f653742904bed45ebe7e37f510af53e4a9cfd01a432978f779ec1d06d2ebb-init/diff:/var/lib/docker/overlay2/fbed3e2386e680328722e9ed68250a4aae1dd3b50a64848ee2ebb685fb1cc002/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9e1f653742904bed45ebe7e37f510af53e4a9cfd01a432978f779ec1d06d2ebb/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9e1f653742904bed45ebe7e37f510af53e4a9cfd01a432978f779ec1d06d2ebb/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9e1f653742904bed45ebe7e37f510af53e4a9cfd01a432978f779ec1d06d2ebb/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-979859",
	                "Source": "/var/lib/docker/volumes/pause-979859/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-979859",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-979859",
	                "name.minikube.sigs.k8s.io": "pause-979859",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "c0c58ae20feaafaaf7b56f26fba98030bbf2cb0945249febf541b63ea6a1e802",
	            "SandboxKey": "/var/run/docker/netns/c0c58ae20fea",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32963"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32964"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32967"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32965"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32966"
	                    }
	                ]
	            },
	            "Networks": {
	                "pause-979859": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "aecd8d6df546bb374391fc83e2a312baeedd7d04fe0f100e75529f02a50d32b7",
	                    "EndpointID": "a89499b8c8386715216720629093ab971835240b93b0994f7f0289d98752bc34",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "aa:8d:07:0d:f7:3f",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-979859",
	                        "2e11570b3a5f"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-979859 -n pause-979859
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-979859 -n pause-979859: exit status 2 (385.048042ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p pause-979859 logs -n 25
helpers_test.go:261: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬─────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                   ARGS                                                                   │           PROFILE           │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼─────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ stop    │ -p scheduled-stop-369235 --schedule 5m -v=5 --alsologtostderr                                                                            │ scheduled-stop-369235       │ jenkins │ v1.37.0 │ 11 Jan 26 07:46 UTC │                     │
	│ stop    │ -p scheduled-stop-369235 --schedule 5m -v=5 --alsologtostderr                                                                            │ scheduled-stop-369235       │ jenkins │ v1.37.0 │ 11 Jan 26 07:46 UTC │                     │
	│ stop    │ -p scheduled-stop-369235 --schedule 5m -v=5 --alsologtostderr                                                                            │ scheduled-stop-369235       │ jenkins │ v1.37.0 │ 11 Jan 26 07:46 UTC │                     │
	│ stop    │ -p scheduled-stop-369235 --schedule 15s -v=5 --alsologtostderr                                                                           │ scheduled-stop-369235       │ jenkins │ v1.37.0 │ 11 Jan 26 07:46 UTC │                     │
	│ stop    │ -p scheduled-stop-369235 --schedule 15s -v=5 --alsologtostderr                                                                           │ scheduled-stop-369235       │ jenkins │ v1.37.0 │ 11 Jan 26 07:46 UTC │                     │
	│ stop    │ -p scheduled-stop-369235 --schedule 15s -v=5 --alsologtostderr                                                                           │ scheduled-stop-369235       │ jenkins │ v1.37.0 │ 11 Jan 26 07:46 UTC │                     │
	│ stop    │ -p scheduled-stop-369235 --cancel-scheduled                                                                                              │ scheduled-stop-369235       │ jenkins │ v1.37.0 │ 11 Jan 26 07:46 UTC │ 11 Jan 26 07:46 UTC │
	│ stop    │ -p scheduled-stop-369235 --schedule 15s -v=5 --alsologtostderr                                                                           │ scheduled-stop-369235       │ jenkins │ v1.37.0 │ 11 Jan 26 07:47 UTC │                     │
	│ stop    │ -p scheduled-stop-369235 --schedule 15s -v=5 --alsologtostderr                                                                           │ scheduled-stop-369235       │ jenkins │ v1.37.0 │ 11 Jan 26 07:47 UTC │                     │
	│ stop    │ -p scheduled-stop-369235 --schedule 15s -v=5 --alsologtostderr                                                                           │ scheduled-stop-369235       │ jenkins │ v1.37.0 │ 11 Jan 26 07:47 UTC │ 11 Jan 26 07:47 UTC │
	│ delete  │ -p scheduled-stop-369235                                                                                                                 │ scheduled-stop-369235       │ jenkins │ v1.37.0 │ 11 Jan 26 07:47 UTC │ 11 Jan 26 07:47 UTC │
	│ start   │ -p insufficient-storage-671147 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio                         │ insufficient-storage-671147 │ jenkins │ v1.37.0 │ 11 Jan 26 07:47 UTC │                     │
	│ delete  │ -p insufficient-storage-671147                                                                                                           │ insufficient-storage-671147 │ jenkins │ v1.37.0 │ 11 Jan 26 07:48 UTC │ 11 Jan 26 07:48 UTC │
	│ start   │ -p offline-crio-966080 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio                        │ offline-crio-966080         │ jenkins │ v1.37.0 │ 11 Jan 26 07:48 UTC │ 11 Jan 26 07:49 UTC │
	│ start   │ -p pause-979859 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                │ pause-979859                │ jenkins │ v1.37.0 │ 11 Jan 26 07:48 UTC │ 11 Jan 26 07:49 UTC │
	│ start   │ -p stopped-upgrade-992502 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ stopped-upgrade-992502      │ jenkins │ v1.35.0 │ 11 Jan 26 07:48 UTC │ 11 Jan 26 07:48 UTC │
	│ start   │ -p running-upgrade-010890 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ running-upgrade-010890      │ jenkins │ v1.35.0 │ 11 Jan 26 07:48 UTC │ 11 Jan 26 07:48 UTC │
	│ stop    │ stopped-upgrade-992502 stop                                                                                                              │ stopped-upgrade-992502      │ jenkins │ v1.35.0 │ 11 Jan 26 07:48 UTC │ 11 Jan 26 07:48 UTC │
	│ start   │ -p running-upgrade-010890 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ running-upgrade-010890      │ jenkins │ v1.37.0 │ 11 Jan 26 07:48 UTC │ 11 Jan 26 07:49 UTC │
	│ start   │ -p stopped-upgrade-992502 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ stopped-upgrade-992502      │ jenkins │ v1.37.0 │ 11 Jan 26 07:48 UTC │                     │
	│ delete  │ -p offline-crio-966080                                                                                                                   │ offline-crio-966080         │ jenkins │ v1.37.0 │ 11 Jan 26 07:49 UTC │ 11 Jan 26 07:49 UTC │
	│ start   │ -p pause-979859 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                         │ pause-979859                │ jenkins │ v1.37.0 │ 11 Jan 26 07:49 UTC │ 11 Jan 26 07:49 UTC │
	│ start   │ -p kubernetes-upgrade-416144 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-416144   │ jenkins │ v1.37.0 │ 11 Jan 26 07:49 UTC │                     │
	│ pause   │ -p pause-979859 --alsologtostderr -v=5                                                                                                   │ pause-979859                │ jenkins │ v1.37.0 │ 11 Jan 26 07:49 UTC │                     │
	│ delete  │ -p running-upgrade-010890                                                                                                                │ running-upgrade-010890      │ jenkins │ v1.37.0 │ 11 Jan 26 07:49 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴─────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2026/01/11 07:49:03
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0111 07:49:03.624213  181612 out.go:360] Setting OutFile to fd 1 ...
	I0111 07:49:03.624322  181612 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 07:49:03.624332  181612 out.go:374] Setting ErrFile to fd 2...
	I0111 07:49:03.624336  181612 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 07:49:03.624566  181612 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22402-4689/.minikube/bin
	I0111 07:49:03.625100  181612 out.go:368] Setting JSON to false
	I0111 07:49:03.626272  181612 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":1892,"bootTime":1768115852,"procs":313,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0111 07:49:03.626327  181612 start.go:143] virtualization: kvm guest
	I0111 07:49:03.628010  181612 out.go:179] * [kubernetes-upgrade-416144] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0111 07:49:03.629015  181612 out.go:179]   - MINIKUBE_LOCATION=22402
	I0111 07:49:03.629032  181612 notify.go:221] Checking for updates...
	I0111 07:49:03.631281  181612 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0111 07:49:03.632548  181612 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22402-4689/kubeconfig
	I0111 07:49:03.633864  181612 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22402-4689/.minikube
	I0111 07:49:03.634904  181612 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0111 07:49:03.635962  181612 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0111 07:49:03.637524  181612 config.go:182] Loaded profile config "pause-979859": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 07:49:03.637638  181612 config.go:182] Loaded profile config "running-upgrade-010890": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0111 07:49:03.637727  181612 config.go:182] Loaded profile config "stopped-upgrade-992502": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0111 07:49:03.637843  181612 driver.go:422] Setting default libvirt URI to qemu:///system
	I0111 07:49:03.664620  181612 docker.go:124] docker version: linux-29.1.4:Docker Engine - Community
	I0111 07:49:03.664700  181612 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0111 07:49:03.724183  181612 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:77 SystemTime:2026-01-11 07:49:03.712986258 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0111 07:49:03.724276  181612 docker.go:319] overlay module found
	I0111 07:49:03.726052  181612 out.go:179] * Using the docker driver based on user configuration
	I0111 07:49:03.727077  181612 start.go:309] selected driver: docker
	I0111 07:49:03.727091  181612 start.go:928] validating driver "docker" against <nil>
	I0111 07:49:03.727102  181612 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0111 07:49:03.727629  181612 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0111 07:49:03.796250  181612 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:77 SystemTime:2026-01-11 07:49:03.786416716 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0111 07:49:03.796435  181612 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I0111 07:49:03.796677  181612 start_flags.go:1001] Wait components to verify : map[apiserver:true system_pods:true]
	I0111 07:49:03.800918  181612 out.go:179] * Using Docker driver with root privileges
	I0111 07:49:03.802136  181612 cni.go:84] Creating CNI manager for ""
	I0111 07:49:03.802218  181612 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0111 07:49:03.802233  181612 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I0111 07:49:03.802316  181612 start.go:353] cluster config:
	{Name:kubernetes-upgrade-416144 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:kubernetes-upgrade-416144 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluste
r.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0111 07:49:03.803590  181612 out.go:179] * Starting "kubernetes-upgrade-416144" primary control-plane node in "kubernetes-upgrade-416144" cluster
	I0111 07:49:03.804469  181612 cache.go:134] Beginning downloading kic base image for docker with crio
	I0111 07:49:03.805497  181612 out.go:179] * Pulling base image v0.0.48-1768032998-22402 ...
	I0111 07:49:03.806523  181612 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I0111 07:49:03.806556  181612 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22402-4689/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I0111 07:49:03.806568  181612 cache.go:65] Caching tarball of preloaded images
	I0111 07:49:03.806566  181612 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 in local docker daemon
	I0111 07:49:03.806657  181612 preload.go:251] Found /home/jenkins/minikube-integration/22402-4689/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0111 07:49:03.806674  181612 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I0111 07:49:03.806782  181612 profile.go:143] Saving config to /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/kubernetes-upgrade-416144/config.json ...
	I0111 07:49:03.806818  181612 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/kubernetes-upgrade-416144/config.json: {Name:mk266c72f0852130996f7bb577db78ce69fe6e67 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 07:49:03.829445  181612 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 in local docker daemon, skipping pull
	I0111 07:49:03.829462  181612 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 exists in daemon, skipping load
	I0111 07:49:03.829480  181612 cache.go:243] Successfully downloaded all kic artifacts
	I0111 07:49:03.829506  181612 start.go:360] acquireMachinesLock for kubernetes-upgrade-416144: {Name:mk65ce35a8e0a0a28c7a78c6010937ed11fb4751 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0111 07:49:03.829590  181612 start.go:364] duration metric: took 70.358µs to acquireMachinesLock for "kubernetes-upgrade-416144"
	I0111 07:49:03.829612  181612 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-416144 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:kubernetes-upgrade-416144 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:
SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0111 07:49:03.829670  181612 start.go:125] createHost starting for "" (driver="docker")
	I0111 07:49:01.394934  180680 out.go:252] * Updating the running docker "pause-979859" container ...
	I0111 07:49:01.394970  180680 machine.go:94] provisionDockerMachine start ...
	I0111 07:49:01.395050  180680 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-979859
	I0111 07:49:01.413430  180680 main.go:144] libmachine: Using SSH client type: native
	I0111 07:49:01.413783  180680 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x9087e0] 0x90b480 <nil>  [] 0s} 127.0.0.1 32963 <nil> <nil>}
	I0111 07:49:01.413814  180680 main.go:144] libmachine: About to run SSH command:
	hostname
	I0111 07:49:01.564562  180680 main.go:144] libmachine: SSH cmd err, output: <nil>: pause-979859
	
	I0111 07:49:01.564590  180680 ubuntu.go:182] provisioning hostname "pause-979859"
	I0111 07:49:01.564750  180680 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-979859
	I0111 07:49:01.590444  180680 main.go:144] libmachine: Using SSH client type: native
	I0111 07:49:01.590706  180680 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x9087e0] 0x90b480 <nil>  [] 0s} 127.0.0.1 32963 <nil> <nil>}
	I0111 07:49:01.590725  180680 main.go:144] libmachine: About to run SSH command:
	sudo hostname pause-979859 && echo "pause-979859" | sudo tee /etc/hostname
	I0111 07:49:01.754692  180680 main.go:144] libmachine: SSH cmd err, output: <nil>: pause-979859
	
	I0111 07:49:01.754775  180680 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-979859
	I0111 07:49:01.773775  180680 main.go:144] libmachine: Using SSH client type: native
	I0111 07:49:01.774006  180680 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x9087e0] 0x90b480 <nil>  [] 0s} 127.0.0.1 32963 <nil> <nil>}
	I0111 07:49:01.774022  180680 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-979859' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-979859/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-979859' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0111 07:49:01.920529  180680 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I0111 07:49:01.920559  180680 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22402-4689/.minikube CaCertPath:/home/jenkins/minikube-integration/22402-4689/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22402-4689/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22402-4689/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22402-4689/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22402-4689/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22402-4689/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22402-4689/.minikube}
	I0111 07:49:01.920581  180680 ubuntu.go:190] setting up certificates
	I0111 07:49:01.920594  180680 provision.go:84] configureAuth start
	I0111 07:49:01.920651  180680 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-979859
	I0111 07:49:01.938685  180680 provision.go:143] copyHostCerts
	I0111 07:49:01.938745  180680 exec_runner.go:144] found /home/jenkins/minikube-integration/22402-4689/.minikube/ca.pem, removing ...
	I0111 07:49:01.938763  180680 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22402-4689/.minikube/ca.pem
	I0111 07:49:01.938882  180680 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22402-4689/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22402-4689/.minikube/ca.pem (1082 bytes)
	I0111 07:49:01.939015  180680 exec_runner.go:144] found /home/jenkins/minikube-integration/22402-4689/.minikube/cert.pem, removing ...
	I0111 07:49:01.939027  180680 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22402-4689/.minikube/cert.pem
	I0111 07:49:01.939060  180680 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22402-4689/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22402-4689/.minikube/cert.pem (1123 bytes)
	I0111 07:49:01.939137  180680 exec_runner.go:144] found /home/jenkins/minikube-integration/22402-4689/.minikube/key.pem, removing ...
	I0111 07:49:01.939144  180680 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22402-4689/.minikube/key.pem
	I0111 07:49:01.939170  180680 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22402-4689/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22402-4689/.minikube/key.pem (1675 bytes)
	I0111 07:49:01.939241  180680 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22402-4689/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22402-4689/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22402-4689/.minikube/certs/ca-key.pem org=jenkins.pause-979859 san=[127.0.0.1 192.168.76.2 localhost minikube pause-979859]
	I0111 07:49:01.977564  180680 provision.go:177] copyRemoteCerts
	I0111 07:49:01.977629  180680 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0111 07:49:01.977668  180680 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-979859
	I0111 07:49:01.996781  180680 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32963 SSHKeyPath:/home/jenkins/minikube-integration/22402-4689/.minikube/machines/pause-979859/id_rsa Username:docker}
	I0111 07:49:02.100090  180680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-4689/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0111 07:49:02.117842  180680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-4689/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I0111 07:49:02.135364  180680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-4689/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0111 07:49:02.152498  180680 provision.go:87] duration metric: took 231.890168ms to configureAuth
	I0111 07:49:02.152538  180680 ubuntu.go:206] setting minikube options for container-runtime
	I0111 07:49:02.152754  180680 config.go:182] Loaded profile config "pause-979859": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 07:49:02.152895  180680 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-979859
	I0111 07:49:02.171362  180680 main.go:144] libmachine: Using SSH client type: native
	I0111 07:49:02.171569  180680 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x9087e0] 0x90b480 <nil>  [] 0s} 127.0.0.1 32963 <nil> <nil>}
	I0111 07:49:02.171585  180680 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0111 07:49:02.509461  180680 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0111 07:49:02.509491  180680 machine.go:97] duration metric: took 1.11451225s to provisionDockerMachine
	I0111 07:49:02.509506  180680 start.go:293] postStartSetup for "pause-979859" (driver="docker")
	I0111 07:49:02.509548  180680 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0111 07:49:02.509636  180680 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0111 07:49:02.509699  180680 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-979859
	I0111 07:49:02.528982  180680 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32963 SSHKeyPath:/home/jenkins/minikube-integration/22402-4689/.minikube/machines/pause-979859/id_rsa Username:docker}
	I0111 07:49:02.635679  180680 ssh_runner.go:195] Run: cat /etc/os-release
	I0111 07:49:02.639526  180680 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0111 07:49:02.639556  180680 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I0111 07:49:02.639568  180680 filesync.go:126] Scanning /home/jenkins/minikube-integration/22402-4689/.minikube/addons for local assets ...
	I0111 07:49:02.639620  180680 filesync.go:126] Scanning /home/jenkins/minikube-integration/22402-4689/.minikube/files for local assets ...
	I0111 07:49:02.639723  180680 filesync.go:149] local asset: /home/jenkins/minikube-integration/22402-4689/.minikube/files/etc/ssl/certs/82382.pem -> 82382.pem in /etc/ssl/certs
	I0111 07:49:02.639871  180680 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0111 07:49:02.647976  180680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-4689/.minikube/files/etc/ssl/certs/82382.pem --> /etc/ssl/certs/82382.pem (1708 bytes)
	I0111 07:49:02.665643  180680 start.go:296] duration metric: took 156.123256ms for postStartSetup
	I0111 07:49:02.665707  180680 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0111 07:49:02.665752  180680 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-979859
	I0111 07:49:02.687499  180680 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32963 SSHKeyPath:/home/jenkins/minikube-integration/22402-4689/.minikube/machines/pause-979859/id_rsa Username:docker}
	I0111 07:49:02.787792  180680 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0111 07:49:02.792682  180680 fix.go:56] duration metric: took 1.42052414s for fixHost
	I0111 07:49:02.792703  180680 start.go:83] releasing machines lock for "pause-979859", held for 1.420591868s
	I0111 07:49:02.792760  180680 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-979859
	I0111 07:49:02.812130  180680 ssh_runner.go:195] Run: cat /version.json
	I0111 07:49:02.812197  180680 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-979859
	I0111 07:49:02.812217  180680 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0111 07:49:02.812292  180680 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-979859
	I0111 07:49:02.832640  180680 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32963 SSHKeyPath:/home/jenkins/minikube-integration/22402-4689/.minikube/machines/pause-979859/id_rsa Username:docker}
	I0111 07:49:02.832833  180680 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32963 SSHKeyPath:/home/jenkins/minikube-integration/22402-4689/.minikube/machines/pause-979859/id_rsa Username:docker}
	I0111 07:49:02.990912  180680 ssh_runner.go:195] Run: systemctl --version
	I0111 07:49:02.998356  180680 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0111 07:49:03.036713  180680 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0111 07:49:03.042045  180680 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0111 07:49:03.042133  180680 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0111 07:49:03.052365  180680 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0111 07:49:03.052388  180680 start.go:496] detecting cgroup driver to use...
	I0111 07:49:03.052429  180680 detect.go:178] detected "systemd" cgroup driver on host os
	I0111 07:49:03.052485  180680 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0111 07:49:03.068968  180680 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0111 07:49:03.081610  180680 docker.go:218] disabling cri-docker service (if available) ...
	I0111 07:49:03.081655  180680 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0111 07:49:03.096771  180680 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0111 07:49:03.109148  180680 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0111 07:49:03.218738  180680 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0111 07:49:03.331908  180680 docker.go:234] disabling docker service ...
	I0111 07:49:03.331964  180680 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0111 07:49:03.349832  180680 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0111 07:49:03.363608  180680 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0111 07:49:03.475027  180680 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0111 07:49:03.595043  180680 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0111 07:49:03.608692  180680 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0111 07:49:03.623517  180680 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0111 07:49:03.623572  180680 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 07:49:03.633308  180680 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I0111 07:49:03.633376  180680 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 07:49:03.642990  180680 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 07:49:03.653084  180680 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 07:49:03.663076  180680 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0111 07:49:03.671251  180680 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 07:49:03.680932  180680 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 07:49:03.693019  180680 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 07:49:03.702579  180680 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0111 07:49:03.710855  180680 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0111 07:49:03.718871  180680 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0111 07:49:03.842922  180680 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0111 07:49:04.040722  180680 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I0111 07:49:04.040826  180680 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0111 07:49:04.045145  180680 start.go:574] Will wait 60s for crictl version
	I0111 07:49:04.045204  180680 ssh_runner.go:195] Run: which crictl
	I0111 07:49:04.049467  180680 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I0111 07:49:04.077825  180680 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.35.0
	RuntimeApiVersion:  v1
	I0111 07:49:04.077929  180680 ssh_runner.go:195] Run: crio --version
	I0111 07:49:04.110341  180680 ssh_runner.go:195] Run: crio --version
	I0111 07:49:04.138678  180680 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.35.0 ...
	I0111 07:49:04.139874  180680 cli_runner.go:164] Run: docker network inspect pause-979859 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0111 07:49:04.159669  180680 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0111 07:49:04.164593  180680 kubeadm.go:884] updating cluster {Name:pause-979859 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:pause-979859 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerName
s:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false regist
ry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I0111 07:49:04.164780  180680 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0111 07:49:04.164864  180680 ssh_runner.go:195] Run: sudo crictl images --output json
	I0111 07:49:04.200441  180680 crio.go:561] all images are preloaded for cri-o runtime.
	I0111 07:49:04.200466  180680 crio.go:433] Images already preloaded, skipping extraction
	I0111 07:49:04.200520  180680 ssh_runner.go:195] Run: sudo crictl images --output json
	I0111 07:49:04.225929  180680 crio.go:561] all images are preloaded for cri-o runtime.
	I0111 07:49:04.225952  180680 cache_images.go:86] Images are preloaded, skipping loading
	I0111 07:49:04.225959  180680 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0 crio true true} ...
	I0111 07:49:04.226060  180680 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-979859 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:pause-979859 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0111 07:49:04.226146  180680 ssh_runner.go:195] Run: crio config
	I0111 07:49:04.286907  180680 cni.go:84] Creating CNI manager for ""
	I0111 07:49:04.286936  180680 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0111 07:49:04.286956  180680 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I0111 07:49:04.286988  180680 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-979859 NodeName:pause-979859 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes
/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0111 07:49:04.287170  180680 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-979859"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0111 07:49:04.287248  180680 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I0111 07:49:04.296480  180680 binaries.go:51] Found k8s binaries, skipping transfer
	I0111 07:49:04.296535  180680 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0111 07:49:04.304826  180680 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I0111 07:49:04.319131  180680 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0111 07:49:04.333602  180680 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2208 bytes)
	I0111 07:49:04.366509  180680 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0111 07:49:04.371465  180680 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0111 07:49:04.510685  180680 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0111 07:49:04.524699  180680 certs.go:69] Setting up /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/pause-979859 for IP: 192.168.76.2
	I0111 07:49:04.524718  180680 certs.go:195] generating shared ca certs ...
	I0111 07:49:04.524732  180680 certs.go:227] acquiring lock for ca certs: {Name:mk9f7a57f61b85f2c0f52e44b6a926769e368f21 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 07:49:04.524886  180680 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22402-4689/.minikube/ca.key
	I0111 07:49:04.524924  180680 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22402-4689/.minikube/proxy-client-ca.key
	I0111 07:49:04.524931  180680 certs.go:257] generating profile certs ...
	I0111 07:49:04.525021  180680 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/pause-979859/client.key
	I0111 07:49:04.525062  180680 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/pause-979859/apiserver.key.a6d7a2a3
	I0111 07:49:04.525094  180680 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/pause-979859/proxy-client.key
	I0111 07:49:04.525194  180680 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-4689/.minikube/certs/8238.pem (1338 bytes)
	W0111 07:49:04.525235  180680 certs.go:480] ignoring /home/jenkins/minikube-integration/22402-4689/.minikube/certs/8238_empty.pem, impossibly tiny 0 bytes
	I0111 07:49:04.525246  180680 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-4689/.minikube/certs/ca-key.pem (1679 bytes)
	I0111 07:49:04.525275  180680 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-4689/.minikube/certs/ca.pem (1082 bytes)
	I0111 07:49:04.525295  180680 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-4689/.minikube/certs/cert.pem (1123 bytes)
	I0111 07:49:04.525314  180680 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-4689/.minikube/certs/key.pem (1675 bytes)
	I0111 07:49:04.525351  180680 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-4689/.minikube/files/etc/ssl/certs/82382.pem (1708 bytes)
	I0111 07:49:04.526026  180680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-4689/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0111 07:49:04.549749  180680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-4689/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0111 07:49:04.577332  180680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-4689/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0111 07:49:04.597205  180680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-4689/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0111 07:49:04.618605  180680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/pause-979859/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0111 07:49:04.639546  180680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/pause-979859/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0111 07:49:04.659515  180680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/pause-979859/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0111 07:49:04.677954  180680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/pause-979859/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0111 07:49:04.699050  180680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-4689/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0111 07:49:04.717753  180680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-4689/.minikube/certs/8238.pem --> /usr/share/ca-certificates/8238.pem (1338 bytes)
	I0111 07:49:04.735791  180680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-4689/.minikube/files/etc/ssl/certs/82382.pem --> /usr/share/ca-certificates/82382.pem (1708 bytes)
	I0111 07:49:04.754299  180680 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I0111 07:49:04.771646  180680 ssh_runner.go:195] Run: openssl version
	I0111 07:49:04.777982  180680 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I0111 07:49:04.785836  180680 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I0111 07:49:04.793646  180680 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0111 07:49:04.797952  180680 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 11 07:23 /usr/share/ca-certificates/minikubeCA.pem
	I0111 07:49:04.798013  180680 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0111 07:49:04.834610  180680 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I0111 07:49:04.842832  180680 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8238.pem
	I0111 07:49:04.851020  180680 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8238.pem /etc/ssl/certs/8238.pem
	I0111 07:49:04.859685  180680 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8238.pem
	I0111 07:49:04.863632  180680 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 11 07:27 /usr/share/ca-certificates/8238.pem
	I0111 07:49:04.863694  180680 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8238.pem
	I0111 07:49:04.901965  180680 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I0111 07:49:04.910700  180680 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/82382.pem
	I0111 07:49:04.918601  180680 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/82382.pem /etc/ssl/certs/82382.pem
	I0111 07:49:04.926258  180680 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/82382.pem
	I0111 07:49:04.930066  180680 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 11 07:27 /usr/share/ca-certificates/82382.pem
	I0111 07:49:04.930131  180680 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/82382.pem
	I0111 07:49:04.969549  180680 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I0111 07:49:04.977983  180680 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0111 07:49:04.982225  180680 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0111 07:49:05.019102  180680 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0111 07:49:05.054090  180680 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0111 07:49:05.089884  180680 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0111 07:49:05.135289  180680 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0111 07:49:05.171131  180680 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0111 07:49:05.217191  180680 kubeadm.go:401] StartCluster: {Name:pause-979859 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:pause-979859 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[
] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-
aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0111 07:49:05.217298  180680 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0111 07:49:05.217369  180680 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0111 07:49:05.248764  180680 cri.go:96] found id: "fb3d48a5461e3d903bd4f6a53010691c4575384f0fe1f9858577740063664cfc"
	I0111 07:49:05.248792  180680 cri.go:96] found id: "03c0e211978ecf5f182769c51bc9eacba756e284410fadbd73e155decf990fec"
	I0111 07:49:05.248821  180680 cri.go:96] found id: "39df3868e30b956ed7b402910c52eb3d3f645eff78410c5ced0e05e07cfb2605"
	I0111 07:49:05.248828  180680 cri.go:96] found id: "4cbe4980b4381d3cedbd6d8e5865265d8171d37f8ceeeece80e97c768b8baca8"
	I0111 07:49:05.248834  180680 cri.go:96] found id: "87587bcd281f1c0a2aef860698f2ac71a9f66c8ad72bd9d536d718035eb38bbc"
	I0111 07:49:05.248840  180680 cri.go:96] found id: "697756342f1b7fa4f1be176804f26aeac8f5640dc6a8e9956e55c932c1ec1e0c"
	I0111 07:49:05.248846  180680 cri.go:96] found id: "8cefc332f1bb940a084c44f74130ed4c2b329c7fb9503f73c7ec6a9e051e9362"
	I0111 07:49:05.248852  180680 cri.go:96] found id: ""
	I0111 07:49:05.248916  180680 ssh_runner.go:195] Run: sudo runc list -f json
	W0111 07:49:05.261644  180680 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-11T07:49:05Z" level=error msg="open /run/runc: no such file or directory"
	I0111 07:49:05.261761  180680 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0111 07:49:05.270416  180680 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I0111 07:49:05.270435  180680 kubeadm.go:598] restartPrimaryControlPlane start ...
	I0111 07:49:05.270496  180680 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0111 07:49:05.278050  180680 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0111 07:49:05.278720  180680 kubeconfig.go:125] found "pause-979859" server: "https://192.168.76.2:8443"
	I0111 07:49:05.279651  180680 kapi.go:59] client config for pause-979859: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22402-4689/.minikube/profiles/pause-979859/client.crt", KeyFile:"/home/jenkins/minikube-integration/22402-4689/.minikube/profiles/pause-979859/client.key", CAFile:"/home/jenkins/minikube-integration/22402-4689/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(
nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f75c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0111 07:49:05.280102  180680 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0111 07:49:05.280118  180680 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0111 07:49:05.280123  180680 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I0111 07:49:05.280126  180680 envvar.go:172] "Feature gate default state" feature="InOrderInformersBatchProcess" enabled=true
	I0111 07:49:05.280130  180680 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=true
	I0111 07:49:05.280138  180680 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=true
	I0111 07:49:05.280454  180680 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0111 07:49:05.289478  180680 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I0111 07:49:05.289507  180680 kubeadm.go:602] duration metric: took 19.066744ms to restartPrimaryControlPlane
	I0111 07:49:05.289514  180680 kubeadm.go:403] duration metric: took 72.342455ms to StartCluster
	I0111 07:49:05.289528  180680 settings.go:142] acquiring lock: {Name:mk091111a06dcfa678353e028948ce400417c137 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 07:49:05.289604  180680 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22402-4689/kubeconfig
	I0111 07:49:05.291617  180680 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-4689/kubeconfig: {Name:mkf02a7b755a7b9c0efbafb355649086e2a25e84 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 07:49:05.292427  180680 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0111 07:49:05.292512  180680 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0111 07:49:05.292577  180680 config.go:182] Loaded profile config "pause-979859": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 07:49:05.298499  180680 out.go:179] * Enabled addons: 
	I0111 07:49:05.298500  180680 out.go:179] * Verifying Kubernetes components...
	I0111 07:49:04.644887  177360 api_server.go:315] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0111 07:49:04.644951  177360 api_server.go:299] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I0111 07:49:05.299859  180680 addons.go:530] duration metric: took 7.354777ms for enable addons: enabled=[]
	I0111 07:49:05.299884  180680 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0111 07:49:05.412089  180680 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0111 07:49:05.425138  180680 node_ready.go:35] waiting up to 6m0s for node "pause-979859" to be "Ready" ...
	I0111 07:49:05.432713  180680 node_ready.go:49] node "pause-979859" is "Ready"
	I0111 07:49:05.432741  180680 node_ready.go:38] duration metric: took 7.552152ms for node "pause-979859" to be "Ready" ...
	I0111 07:49:05.432757  180680 api_server.go:52] waiting for apiserver process to appear ...
	I0111 07:49:05.432830  180680 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0111 07:49:05.444569  180680 api_server.go:72] duration metric: took 152.103738ms to wait for apiserver process to appear ...
	I0111 07:49:05.444614  180680 api_server.go:88] waiting for apiserver healthz status ...
	I0111 07:49:05.444636  180680 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0111 07:49:05.448726  180680 api_server.go:325] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0111 07:49:05.449577  180680 api_server.go:141] control plane version: v1.35.0
	I0111 07:49:05.449599  180680 api_server.go:131] duration metric: took 4.976043ms to wait for apiserver health ...
	I0111 07:49:05.449607  180680 system_pods.go:43] waiting for kube-system pods to appear ...
	I0111 07:49:05.452775  180680 system_pods.go:59] 7 kube-system pods found
	I0111 07:49:05.452832  180680 system_pods.go:61] "coredns-7d764666f9-p8c2j" [aa1e3126-8605-4b65-8bd5-601f8a9e4971] Running
	I0111 07:49:05.452844  180680 system_pods.go:61] "etcd-pause-979859" [fc68f4d1-8c13-4bc8-9b74-65ff40bb8de1] Running
	I0111 07:49:05.452851  180680 system_pods.go:61] "kindnet-8zgsx" [92894192-3035-436e-bad2-126e9c157984] Running
	I0111 07:49:05.452858  180680 system_pods.go:61] "kube-apiserver-pause-979859" [209a55cc-a23c-4bb8-bb8f-48375d938576] Running
	I0111 07:49:05.452865  180680 system_pods.go:61] "kube-controller-manager-pause-979859" [5120c914-5c48-4c18-a6b3-812d88e8ea84] Running
	I0111 07:49:05.452874  180680 system_pods.go:61] "kube-proxy-99tf8" [92494e6a-e314-4b9c-93d4-5a15fe8e759f] Running
	I0111 07:49:05.452879  180680 system_pods.go:61] "kube-scheduler-pause-979859" [2970e557-ea1b-4466-a213-954e07b90bec] Running
	I0111 07:49:05.452887  180680 system_pods.go:74] duration metric: took 3.274456ms to wait for pod list to return data ...
	I0111 07:49:05.452900  180680 default_sa.go:34] waiting for default service account to be created ...
	I0111 07:49:05.454785  180680 default_sa.go:45] found service account: "default"
	I0111 07:49:05.454833  180680 default_sa.go:55] duration metric: took 1.924957ms for default service account to be created ...
	I0111 07:49:05.454843  180680 system_pods.go:116] waiting for k8s-apps to be running ...
	I0111 07:49:05.457375  180680 system_pods.go:86] 7 kube-system pods found
	I0111 07:49:05.457407  180680 system_pods.go:89] "coredns-7d764666f9-p8c2j" [aa1e3126-8605-4b65-8bd5-601f8a9e4971] Running
	I0111 07:49:05.457414  180680 system_pods.go:89] "etcd-pause-979859" [fc68f4d1-8c13-4bc8-9b74-65ff40bb8de1] Running
	I0111 07:49:05.457417  180680 system_pods.go:89] "kindnet-8zgsx" [92894192-3035-436e-bad2-126e9c157984] Running
	I0111 07:49:05.457421  180680 system_pods.go:89] "kube-apiserver-pause-979859" [209a55cc-a23c-4bb8-bb8f-48375d938576] Running
	I0111 07:49:05.457425  180680 system_pods.go:89] "kube-controller-manager-pause-979859" [5120c914-5c48-4c18-a6b3-812d88e8ea84] Running
	I0111 07:49:05.457429  180680 system_pods.go:89] "kube-proxy-99tf8" [92494e6a-e314-4b9c-93d4-5a15fe8e759f] Running
	I0111 07:49:05.457432  180680 system_pods.go:89] "kube-scheduler-pause-979859" [2970e557-ea1b-4466-a213-954e07b90bec] Running
	I0111 07:49:05.457438  180680 system_pods.go:126] duration metric: took 2.589106ms to wait for k8s-apps to be running ...
	I0111 07:49:05.457447  180680 system_svc.go:44] waiting for kubelet service to be running ....
	I0111 07:49:05.457486  180680 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0111 07:49:05.471475  180680 system_svc.go:56] duration metric: took 14.019648ms WaitForService to wait for kubelet
	I0111 07:49:05.471502  180680 kubeadm.go:587] duration metric: took 179.040488ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0111 07:49:05.471523  180680 node_conditions.go:102] verifying NodePressure condition ...
	I0111 07:49:05.474478  180680 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0111 07:49:05.474509  180680 node_conditions.go:123] node cpu capacity is 8
	I0111 07:49:05.474527  180680 node_conditions.go:105] duration metric: took 2.99888ms to run NodePressure ...
	I0111 07:49:05.474545  180680 start.go:242] waiting for startup goroutines ...
	I0111 07:49:05.474556  180680 start.go:247] waiting for cluster config update ...
	I0111 07:49:05.474568  180680 start.go:256] writing updated cluster config ...
	I0111 07:49:05.474920  180680 ssh_runner.go:195] Run: rm -f paused
	I0111 07:49:05.479259  180680 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0111 07:49:05.480081  180680 kapi.go:59] client config for pause-979859: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22402-4689/.minikube/profiles/pause-979859/client.crt", KeyFile:"/home/jenkins/minikube-integration/22402-4689/.minikube/profiles/pause-979859/client.key", CAFile:"/home/jenkins/minikube-integration/22402-4689/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(
nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f75c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0111 07:49:05.483064  180680 pod_ready.go:83] waiting for pod "coredns-7d764666f9-p8c2j" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 07:49:05.486928  180680 pod_ready.go:94] pod "coredns-7d764666f9-p8c2j" is "Ready"
	I0111 07:49:05.486947  180680 pod_ready.go:86] duration metric: took 3.862051ms for pod "coredns-7d764666f9-p8c2j" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 07:49:05.488880  180680 pod_ready.go:83] waiting for pod "etcd-pause-979859" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 07:49:05.492616  180680 pod_ready.go:94] pod "etcd-pause-979859" is "Ready"
	I0111 07:49:05.492635  180680 pod_ready.go:86] duration metric: took 3.730305ms for pod "etcd-pause-979859" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 07:49:05.494363  180680 pod_ready.go:83] waiting for pod "kube-apiserver-pause-979859" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 07:49:05.498226  180680 pod_ready.go:94] pod "kube-apiserver-pause-979859" is "Ready"
	I0111 07:49:05.498249  180680 pod_ready.go:86] duration metric: took 3.862098ms for pod "kube-apiserver-pause-979859" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 07:49:05.500104  180680 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-979859" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 07:49:05.883657  180680 pod_ready.go:94] pod "kube-controller-manager-pause-979859" is "Ready"
	I0111 07:49:05.883690  180680 pod_ready.go:86] duration metric: took 383.562542ms for pod "kube-controller-manager-pause-979859" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 07:49:06.083735  180680 pod_ready.go:83] waiting for pod "kube-proxy-99tf8" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 07:49:06.484037  180680 pod_ready.go:94] pod "kube-proxy-99tf8" is "Ready"
	I0111 07:49:06.484068  180680 pod_ready.go:86] duration metric: took 400.301845ms for pod "kube-proxy-99tf8" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 07:49:06.684196  180680 pod_ready.go:83] waiting for pod "kube-scheduler-pause-979859" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 07:49:07.083551  180680 pod_ready.go:94] pod "kube-scheduler-pause-979859" is "Ready"
	I0111 07:49:07.083582  180680 pod_ready.go:86] duration metric: took 399.355447ms for pod "kube-scheduler-pause-979859" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 07:49:07.083598  180680 pod_ready.go:40] duration metric: took 1.604309709s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0111 07:49:07.128819  180680 start.go:625] kubectl: 1.35.0, cluster: 1.35.0 (minor skew: 0)
	I0111 07:49:07.166003  180680 out.go:179] * Done! kubectl is now configured to use "pause-979859" cluster and "default" namespace by default
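
The pod_ready.go entries above poll each kube-system pod until its Ready condition reports true for one of the listed labels. Below is a minimal sketch of that kind of check using client-go; the kubeconfig path and the single label selector are illustrative assumptions, not the values minikube itself uses (minikube builds its client config in kapi.go, as shown in the log).

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig path, for illustration only.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// List kube-system pods carrying one of the labels the log waits on.
	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
		metav1.ListOptions{LabelSelector: "k8s-app=kube-dns"})
	if err != nil {
		panic(err)
	}
	for _, pod := range pods.Items {
		ready := false
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				ready = true
			}
		}
		fmt.Printf("%s ready=%v\n", pod.Name, ready)
	}
}
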
	I0111 07:49:03.831362  181612 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0111 07:49:03.831568  181612 start.go:159] libmachine.API.Create for "kubernetes-upgrade-416144" (driver="docker")
	I0111 07:49:03.831593  181612 client.go:173] LocalClient.Create starting
	I0111 07:49:03.831650  181612 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22402-4689/.minikube/certs/ca.pem
	I0111 07:49:03.831692  181612 main.go:144] libmachine: Decoding PEM data...
	I0111 07:49:03.831716  181612 main.go:144] libmachine: Parsing certificate...
	I0111 07:49:03.831767  181612 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22402-4689/.minikube/certs/cert.pem
	I0111 07:49:03.831790  181612 main.go:144] libmachine: Decoding PEM data...
	I0111 07:49:03.831852  181612 main.go:144] libmachine: Parsing certificate...
	I0111 07:49:03.832179  181612 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-416144 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0111 07:49:03.850010  181612 cli_runner.go:211] docker network inspect kubernetes-upgrade-416144 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0111 07:49:03.850079  181612 network_create.go:284] running [docker network inspect kubernetes-upgrade-416144] to gather additional debugging logs...
	I0111 07:49:03.850101  181612 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-416144
	W0111 07:49:03.868349  181612 cli_runner.go:211] docker network inspect kubernetes-upgrade-416144 returned with exit code 1
	I0111 07:49:03.868376  181612 network_create.go:287] error running [docker network inspect kubernetes-upgrade-416144]: docker network inspect kubernetes-upgrade-416144: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network kubernetes-upgrade-416144 not found
	I0111 07:49:03.868390  181612 network_create.go:289] output of [docker network inspect kubernetes-upgrade-416144]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network kubernetes-upgrade-416144 not found
	
	** /stderr **
	I0111 07:49:03.868501  181612 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0111 07:49:03.886928  181612 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-97182e010def IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:7c:25:92:4b:a6} reservation:<nil>}
	I0111 07:49:03.887434  181612 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-2c82d96e68c9 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:72:e5:3c:51:ff:2e} reservation:<nil>}
	I0111 07:49:03.887933  181612 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-a487bdecf87a IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:76:a6:1b:9d:11:3d} reservation:<nil>}
	I0111 07:49:03.888436  181612 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-aecd8d6df546 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:ea:b2:1b:fc:20:f2} reservation:<nil>}
	I0111 07:49:03.889181  181612 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f184d0}
	I0111 07:49:03.889212  181612 network_create.go:124] attempt to create docker network kubernetes-upgrade-416144 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I0111 07:49:03.889260  181612 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-416144 kubernetes-upgrade-416144
	I0111 07:49:03.941031  181612 network_create.go:108] docker network kubernetes-upgrade-416144 192.168.85.0/24 created
	I0111 07:49:03.941067  181612 kic.go:121] calculated static IP "192.168.85.2" for the "kubernetes-upgrade-416144" container
	I0111 07:49:03.941142  181612 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0111 07:49:03.961707  181612 cli_runner.go:164] Run: docker volume create kubernetes-upgrade-416144 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-416144 --label created_by.minikube.sigs.k8s.io=true
	I0111 07:49:03.981716  181612 oci.go:103] Successfully created a docker volume kubernetes-upgrade-416144
	I0111 07:49:03.981811  181612 cli_runner.go:164] Run: docker run --rm --name kubernetes-upgrade-416144-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-416144 --entrypoint /usr/bin/test -v kubernetes-upgrade-416144:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 -d /var/lib
	I0111 07:49:04.414513  181612 oci.go:107] Successfully prepared a docker volume kubernetes-upgrade-416144
	I0111 07:49:04.414591  181612 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I0111 07:49:04.414604  181612 kic.go:194] Starting extracting preloaded images to volume ...
	I0111 07:49:04.415050  181612 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22402-4689/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v kubernetes-upgrade-416144:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 -I lz4 -xf /preloaded.tar -C /extractDir
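
The network.go lines above step through candidate /24 subnets (192.168.49.0, .58.0, .67.0, .76.0, ...) and skip any whose range is already bound to a local bridge interface before settling on 192.168.85.0/24. A rough standard-library sketch of that idea follows; the candidate list and the interface check are simplified assumptions and do not reproduce minikube's actual reservation logic.

package main

import (
	"fmt"
	"net"
)

// taken reports whether any local interface already has an address inside cidr.
func taken(cidr string) bool {
	_, subnet, err := net.ParseCIDR(cidr)
	if err != nil {
		return true
	}
	addrs, err := net.InterfaceAddrs()
	if err != nil {
		return true
	}
	for _, a := range addrs {
		if ipnet, ok := a.(*net.IPNet); ok && subnet.Contains(ipnet.IP) {
			return true
		}
	}
	return false
}

func main() {
	// Simplified candidate list; minikube walks private ranges in a similar order.
	candidates := []string{"192.168.49.0/24", "192.168.58.0/24", "192.168.67.0/24", "192.168.76.0/24", "192.168.85.0/24"}
	for _, c := range candidates {
		if taken(c) {
			fmt.Println("skipping subnet that is taken:", c)
			continue
		}
		fmt.Println("using free private subnet:", c)
		break
	}
}
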
	I0111 07:49:04.545976  176771 ssh_runner.go:235] Completed: sudo /usr/bin/crictl stop --timeout=10 914d4aed25cfd45075612ec9d7ddf868d011974f7b31e21b7c1cdc35d471cea8 f3c9ba4f3e5cd8428df44dbe59d03852f25f3837b76d9203e6ecb23e7a6f2c3e 3131a5f033877e6f8988925bfe183633dc80a2dab8eae1183d75736367901136 3c7413176aaff01341bb76bb3ad734f63709042b8053fad4b72da3195eee13bc bdf6fe4113ab43d130044ff170b2c826b642fa3e776794c980ed4b74ff71fd47 31ab284fa880a55b0c56d9021f3e815e22329e6bb89ba430c5d158b67e01dfbc 97f478d699abfe3535adb73be64057a384396a9b1e90a1dbe1f564a57c953751: (11.026668496s)
	I0111 07:49:04.546055  176771 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0111 07:49:04.594222  176771 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0111 07:49:04.607476  176771 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5651 Jan 11 07:48 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Jan 11 07:48 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 Jan 11 07:48 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Jan 11 07:48 /etc/kubernetes/scheduler.conf
	
	I0111 07:49:04.607540  176771 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0111 07:49:04.620642  176771 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0111 07:49:04.632603  176771 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0111 07:49:04.644614  176771 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0111 07:49:04.644678  176771 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0111 07:49:04.654333  176771 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0111 07:49:04.665839  176771 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0111 07:49:04.665902  176771 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0111 07:49:04.676054  176771 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0111 07:49:04.687947  176771 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0111 07:49:04.735502  176771 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0111 07:49:06.060991  176771 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.32544916s)
	I0111 07:49:06.061063  176771 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0111 07:49:06.279746  176771 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0111 07:49:06.334525  176771 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0111 07:49:06.396058  176771 api_server.go:52] waiting for apiserver process to appear ...
	I0111 07:49:06.396128  176771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0111 07:49:06.896995  176771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0111 07:49:07.397029  176771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0111 07:49:07.412942  176771 api_server.go:72] duration metric: took 1.01689379s to wait for apiserver process to appear ...
	I0111 07:49:07.412966  176771 api_server.go:88] waiting for apiserver healthz status ...
	I0111 07:49:07.412996  176771 api_server.go:299] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0111 07:49:08.370636  176771 api_server.go:325] https://192.168.94.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0111 07:49:08.370665  176771 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0111 07:49:08.370680  176771 api_server.go:299] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0111 07:49:08.398856  176771 api_server.go:325] https://192.168.94.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0111 07:49:08.398909  176771 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0111 07:49:08.413048  176771 api_server.go:299] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0111 07:49:08.718873  176771 api_server.go:325] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0111 07:49:08.718915  176771 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0111 07:49:08.913305  176771 api_server.go:299] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0111 07:49:08.917290  176771 api_server.go:325] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0111 07:49:08.917322  176771 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0111 07:49:09.413950  176771 api_server.go:299] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0111 07:49:09.551351  176771 api_server.go:325] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0111 07:49:09.551382  176771 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0111 07:49:09.913895  176771 api_server.go:299] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0111 07:49:09.918558  176771 api_server.go:325] https://192.168.94.2:8443/healthz returned 200:
	ok
	I0111 07:49:09.924700  176771 api_server.go:141] control plane version: v1.32.0
	I0111 07:49:09.924731  176771 api_server.go:131] duration metric: took 2.511757337s to wait for apiserver health ...
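
The sequence above is the usual restart pattern for the apiserver health wait: /healthz first answers 403 because the anonymous user is forbidden, then 500 while the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks finish, and finally 200. A minimal polling sketch over plain HTTPS is shown below; the hard-coded endpoint and the insecure TLS setting are illustrative assumptions (the real check authenticates with the cluster's client certificate instead).

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Endpoint taken from the log, for illustration only.
	url := "https://192.168.94.2:8443/healthz"
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Skipping verification keeps the sketch self-contained; do not do this in production.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for i := 0; i < 30; i++ {
		resp, err := client.Get(url)
		if err == nil {
			status := resp.StatusCode
			resp.Body.Close()
			if status == http.StatusOK {
				fmt.Println("apiserver healthy")
				return
			}
			// 403 (anonymous forbidden) and 500 (post-start hooks pending) both mean "not ready yet".
			fmt.Println("healthz returned", status, "- retrying")
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("apiserver never became healthy")
}
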
	I0111 07:49:09.924743  176771 cni.go:84] Creating CNI manager for ""
	I0111 07:49:09.924751  176771 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0111 07:49:09.926784  176771 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I0111 07:49:09.646886  177360 api_server.go:315] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0111 07:49:09.646936  177360 api_server.go:299] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I0111 07:49:09.929407  176771 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0111 07:49:09.933950  176771 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.32.0/kubectl ...
	I0111 07:49:09.933977  176771 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I0111 07:49:09.955840  176771 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
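
Each ssh_runner.go entry in these logs runs a command inside the node container over SSH; the sshutil.go lines show the client being created against 127.0.0.1 on a mapped host port with the profile's id_rsa key. A stripped-down sketch of that pattern with golang.org/x/crypto/ssh follows; the address, key path, and command are placeholders, not minikube's implementation.

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Placeholder key path and address; the log uses machines/<profile>/id_rsa and a mapped host port.
	key, err := os.ReadFile("/path/to/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local test node, not for production
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:32973", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()
	// Example command mirroring one of the log entries above.
	out, err := session.CombinedOutput("sudo systemctl is-active kubelet")
	fmt.Printf("output: %s err: %v\n", out, err)
}
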
	I0111 07:49:10.402334  176771 system_pods.go:43] waiting for kube-system pods to appear ...
	I0111 07:49:10.406569  176771 system_pods.go:59] 8 kube-system pods found
	I0111 07:49:10.406619  176771 system_pods.go:61] "coredns-668d6bf9bc-8smvl" [d66b70e0-1cd3-46e2-b979-6306ae403a02] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0111 07:49:10.406633  176771 system_pods.go:61] "etcd-running-upgrade-010890" [787bb0ca-cd24-41ef-b1a9-6eeb9a822699] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0111 07:49:10.406650  176771 system_pods.go:61] "kindnet-vxw9r" [1b5bb60c-fd84-45ff-9138-c228fb17e318] Running
	I0111 07:49:10.406662  176771 system_pods.go:61] "kube-apiserver-running-upgrade-010890" [4b58a09e-6115-402f-976a-0ab9c5f2d2b3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0111 07:49:10.406671  176771 system_pods.go:61] "kube-controller-manager-running-upgrade-010890" [a4d978f5-e922-4f49-93f4-5cab31a88dce] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0111 07:49:10.406682  176771 system_pods.go:61] "kube-proxy-jvnm2" [80fc0fae-453f-41b0-b894-9cd23573e82d] Running
	I0111 07:49:10.406694  176771 system_pods.go:61] "kube-scheduler-running-upgrade-010890" [5769290a-d71d-4b9c-b3aa-307c52d471a9] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0111 07:49:10.406700  176771 system_pods.go:61] "storage-provisioner" [66c93caa-80b6-4843-97ce-6cf76e5c249b] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0111 07:49:10.406708  176771 system_pods.go:74] duration metric: took 4.352714ms to wait for pod list to return data ...
	I0111 07:49:10.406716  176771 node_conditions.go:102] verifying NodePressure condition ...
	I0111 07:49:10.409087  176771 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0111 07:49:10.409111  176771 node_conditions.go:123] node cpu capacity is 8
	I0111 07:49:10.409123  176771 node_conditions.go:105] duration metric: took 2.402626ms to run NodePressure ...
	I0111 07:49:10.409174  176771 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0111 07:49:10.680488  176771 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0111 07:49:10.688909  176771 ops.go:34] apiserver oom_adj: -16
	I0111 07:49:10.688933  176771 kubeadm.go:602] duration metric: took 17.231018568s to restartPrimaryControlPlane
	I0111 07:49:10.688946  176771 kubeadm.go:403] duration metric: took 17.298937599s to StartCluster
	I0111 07:49:10.688967  176771 settings.go:142] acquiring lock: {Name:mk091111a06dcfa678353e028948ce400417c137 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 07:49:10.689039  176771 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22402-4689/kubeconfig
	I0111 07:49:10.690343  176771 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-4689/kubeconfig: {Name:mkf02a7b755a7b9c0efbafb355649086e2a25e84 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 07:49:10.690573  176771 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0111 07:49:10.690645  176771 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0111 07:49:10.690754  176771 addons.go:70] Setting storage-provisioner=true in profile "running-upgrade-010890"
	I0111 07:49:10.690778  176771 addons.go:239] Setting addon storage-provisioner=true in "running-upgrade-010890"
	W0111 07:49:10.690787  176771 addons.go:248] addon storage-provisioner should already be in state true
	I0111 07:49:10.690778  176771 addons.go:70] Setting default-storageclass=true in profile "running-upgrade-010890"
	I0111 07:49:10.690817  176771 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-010890"
	I0111 07:49:10.690848  176771 host.go:66] Checking if "running-upgrade-010890" exists ...
	I0111 07:49:10.690860  176771 config.go:182] Loaded profile config "running-upgrade-010890": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0111 07:49:10.691167  176771 cli_runner.go:164] Run: docker container inspect running-upgrade-010890 --format={{.State.Status}}
	I0111 07:49:10.691347  176771 cli_runner.go:164] Run: docker container inspect running-upgrade-010890 --format={{.State.Status}}
	I0111 07:49:10.694937  176771 out.go:179] * Verifying Kubernetes components...
	I0111 07:49:10.696273  176771 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0111 07:49:10.715604  176771 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0111 07:49:10.715991  176771 kapi.go:59] client config for running-upgrade-010890: &rest.Config{Host:"https://192.168.94.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22402-4689/.minikube/profiles/running-upgrade-010890/client.crt", KeyFile:"/home/jenkins/minikube-integration/22402-4689/.minikube/profiles/running-upgrade-010890/client.key", CAFile:"/home/jenkins/minikube-integration/22402-4689/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]ui
nt8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f75c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0111 07:49:10.716325  176771 addons.go:239] Setting addon default-storageclass=true in "running-upgrade-010890"
	W0111 07:49:10.716346  176771 addons.go:248] addon default-storageclass should already be in state true
	I0111 07:49:10.716373  176771 host.go:66] Checking if "running-upgrade-010890" exists ...
	I0111 07:49:10.716711  176771 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0111 07:49:10.716727  176771 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0111 07:49:10.716774  176771 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-010890
	I0111 07:49:10.716865  176771 cli_runner.go:164] Run: docker container inspect running-upgrade-010890 --format={{.State.Status}}
	I0111 07:49:10.743419  176771 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I0111 07:49:10.743450  176771 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0111 07:49:10.743617  176771 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-010890
	I0111 07:49:10.750183  176771 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32973 SSHKeyPath:/home/jenkins/minikube-integration/22402-4689/.minikube/machines/running-upgrade-010890/id_rsa Username:docker}
	I0111 07:49:10.771598  176771 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32973 SSHKeyPath:/home/jenkins/minikube-integration/22402-4689/.minikube/machines/running-upgrade-010890/id_rsa Username:docker}
	I0111 07:49:10.843456  176771 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0111 07:49:10.862869  176771 api_server.go:52] waiting for apiserver process to appear ...
	I0111 07:49:10.862945  176771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0111 07:49:10.868601  176771 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0111 07:49:10.876338  176771 api_server.go:72] duration metric: took 185.732086ms to wait for apiserver process to appear ...
	I0111 07:49:10.876369  176771 api_server.go:88] waiting for apiserver healthz status ...
	I0111 07:49:10.876389  176771 api_server.go:299] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0111 07:49:10.881683  176771 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0111 07:49:10.882503  176771 api_server.go:325] https://192.168.94.2:8443/healthz returned 200:
	ok
	I0111 07:49:10.884141  176771 api_server.go:141] control plane version: v1.32.0
	I0111 07:49:10.884168  176771 api_server.go:131] duration metric: took 7.784141ms to wait for apiserver health ...
	I0111 07:49:10.884178  176771 system_pods.go:43] waiting for kube-system pods to appear ...
	I0111 07:49:10.887421  176771 system_pods.go:59] 8 kube-system pods found
	I0111 07:49:10.887458  176771 system_pods.go:61] "coredns-668d6bf9bc-8smvl" [d66b70e0-1cd3-46e2-b979-6306ae403a02] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0111 07:49:10.887470  176771 system_pods.go:61] "etcd-running-upgrade-010890" [787bb0ca-cd24-41ef-b1a9-6eeb9a822699] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0111 07:49:10.887486  176771 system_pods.go:61] "kindnet-vxw9r" [1b5bb60c-fd84-45ff-9138-c228fb17e318] Running
	I0111 07:49:10.887500  176771 system_pods.go:61] "kube-apiserver-running-upgrade-010890" [4b58a09e-6115-402f-976a-0ab9c5f2d2b3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0111 07:49:10.887508  176771 system_pods.go:61] "kube-controller-manager-running-upgrade-010890" [a4d978f5-e922-4f49-93f4-5cab31a88dce] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0111 07:49:10.887515  176771 system_pods.go:61] "kube-proxy-jvnm2" [80fc0fae-453f-41b0-b894-9cd23573e82d] Running
	I0111 07:49:10.887523  176771 system_pods.go:61] "kube-scheduler-running-upgrade-010890" [5769290a-d71d-4b9c-b3aa-307c52d471a9] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0111 07:49:10.887528  176771 system_pods.go:61] "storage-provisioner" [66c93caa-80b6-4843-97ce-6cf76e5c249b] Running
	I0111 07:49:10.887536  176771 system_pods.go:74] duration metric: took 3.350674ms to wait for pod list to return data ...
	I0111 07:49:10.887551  176771 kubeadm.go:587] duration metric: took 196.947136ms to wait for: map[apiserver:true system_pods:true]
	I0111 07:49:10.887565  176771 node_conditions.go:102] verifying NodePressure condition ...
	I0111 07:49:10.890502  176771 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0111 07:49:10.890524  176771 node_conditions.go:123] node cpu capacity is 8
	I0111 07:49:10.890538  176771 node_conditions.go:105] duration metric: took 2.963048ms to run NodePressure ...
	I0111 07:49:10.890554  176771 start.go:242] waiting for startup goroutines ...
	I0111 07:49:11.420337  176771 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I0111 07:49:11.421693  176771 addons.go:530] duration metric: took 731.053002ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0111 07:49:11.421743  176771 start.go:247] waiting for cluster config update ...
	I0111 07:49:11.421763  176771 start.go:256] writing updated cluster config ...
	I0111 07:49:11.422024  176771 ssh_runner.go:195] Run: rm -f paused
	I0111 07:49:11.471768  176771 start.go:625] kubectl: 1.35.0, cluster: 1.32.0 (minor skew: 3)
	I0111 07:49:11.473682  176771 out.go:203] 
	W0111 07:49:11.475020  176771 out.go:285] ! /usr/local/bin/kubectl is version 1.35.0, which may have incompatibilities with Kubernetes 1.32.0.
	I0111 07:49:11.476376  176771 out.go:179]   - Want kubectl v1.32.0? Try 'minikube kubectl -- get pods -A'
	I0111 07:49:11.477661  176771 out.go:179] * Done! kubectl is now configured to use "running-upgrade-010890" cluster and "default" namespace by default
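
The closing lines compare the local kubectl (1.35.0) against the cluster version (1.32.0) and warn once the minor skew exceeds one. A toy sketch of that comparison is below; the parsing is deliberately simplistic and only meant to show where the "minor skew: 3" figure comes from.

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minor extracts the minor component of a "major.minor.patch" version string.
func minor(v string) int {
	parts := strings.Split(v, ".")
	if len(parts) < 2 {
		return 0
	}
	m, _ := strconv.Atoi(parts[1])
	return m
}

func main() {
	kubectlVersion, clusterVersion := "1.35.0", "1.32.0"
	skew := minor(kubectlVersion) - minor(clusterVersion)
	if skew < 0 {
		skew = -skew
	}
	fmt.Printf("kubectl: %s, cluster: %s (minor skew: %d)\n", kubectlVersion, clusterVersion, skew)
	if skew > 1 {
		fmt.Printf("! kubectl is version %s, which may have incompatibilities with Kubernetes %s.\n",
			kubectlVersion, clusterVersion)
	}
}
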
	
	
	==> CRI-O <==
	Jan 11 07:49:03 pause-979859 crio[2198]: time="2026-01-11T07:49:03.947071242Z" level=info msg="RDT not available in the host system"
	Jan 11 07:49:03 pause-979859 crio[2198]: time="2026-01-11T07:49:03.947086484Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Jan 11 07:49:03 pause-979859 crio[2198]: time="2026-01-11T07:49:03.947904758Z" level=info msg="Conmon does support the --sync option"
	Jan 11 07:49:03 pause-979859 crio[2198]: time="2026-01-11T07:49:03.947922177Z" level=info msg="Conmon does support the --log-global-size-max option"
	Jan 11 07:49:03 pause-979859 crio[2198]: time="2026-01-11T07:49:03.947934587Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Jan 11 07:49:03 pause-979859 crio[2198]: time="2026-01-11T07:49:03.948742288Z" level=info msg="Conmon does support the --sync option"
	Jan 11 07:49:03 pause-979859 crio[2198]: time="2026-01-11T07:49:03.948757208Z" level=info msg="Conmon does support the --log-global-size-max option"
	Jan 11 07:49:03 pause-979859 crio[2198]: time="2026-01-11T07:49:03.953878688Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jan 11 07:49:03 pause-979859 crio[2198]: time="2026-01-11T07:49:03.953898913Z" level=info msg="Updated default CNI network name to kindnet"
	Jan 11 07:49:03 pause-979859 crio[2198]: time="2026-01-11T07:49:03.954498348Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oci/hoo
ks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"systemd\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n    uid_
mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory
= \"12MiB\"\n        no_sync_log = false\n        container_create_timeout = 240\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n        container_create_timeout = 240\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    namespaced_auth_dir = \"/etc/crio/auth\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"en
forcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_dir = \"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [cri
o.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [crio.nri]\n    enable_nri = true\n    nri_listen = \"/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Jan 11 07:49:03 pause-979859 crio[2198]: time="2026-01-11T07:49:03.954881137Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Jan 11 07:49:03 pause-979859 crio[2198]: time="2026-01-11T07:49:03.954940217Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Jan 11 07:49:04 pause-979859 crio[2198]: time="2026-01-11T07:49:04.033901204Z" level=info msg="Got pod network &{Name:coredns-7d764666f9-p8c2j Namespace:kube-system ID:aa6d3b81020f919ed00e576c65ccc2028eae17b1de235a3acbf5b984f13481ec UID:aa1e3126-8605-4b65-8bd5-601f8a9e4971 NetNS:/var/run/netns/d235702f-1f38-4abc-bd99-a56ecc8aa9d7 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00043c268}] Aliases:map[]}"
	Jan 11 07:49:04 pause-979859 crio[2198]: time="2026-01-11T07:49:04.034145319Z" level=info msg="Checking pod kube-system_coredns-7d764666f9-p8c2j for CNI network kindnet (type=ptp)"
	Jan 11 07:49:04 pause-979859 crio[2198]: time="2026-01-11T07:49:04.034580158Z" level=info msg="Registered SIGHUP reload watcher"
	Jan 11 07:49:04 pause-979859 crio[2198]: time="2026-01-11T07:49:04.034605064Z" level=info msg="Starting seccomp notifier watcher"
	Jan 11 07:49:04 pause-979859 crio[2198]: time="2026-01-11T07:49:04.034665145Z" level=info msg="Create NRI interface"
	Jan 11 07:49:04 pause-979859 crio[2198]: time="2026-01-11T07:49:04.034786125Z" level=info msg="built-in NRI default validator is disabled"
	Jan 11 07:49:04 pause-979859 crio[2198]: time="2026-01-11T07:49:04.034811473Z" level=info msg="runtime interface created"
	Jan 11 07:49:04 pause-979859 crio[2198]: time="2026-01-11T07:49:04.034829582Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Jan 11 07:49:04 pause-979859 crio[2198]: time="2026-01-11T07:49:04.034844545Z" level=info msg="runtime interface starting up..."
	Jan 11 07:49:04 pause-979859 crio[2198]: time="2026-01-11T07:49:04.034851498Z" level=info msg="starting plugins..."
	Jan 11 07:49:04 pause-979859 crio[2198]: time="2026-01-11T07:49:04.034866641Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Jan 11 07:49:04 pause-979859 crio[2198]: time="2026-01-11T07:49:04.035253688Z" level=info msg="No systemd watchdog enabled"
	Jan 11 07:49:04 pause-979859 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	fb3d48a5461e3       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                     14 seconds ago      Running             coredns                   0                   aa6d3b81020f9       coredns-7d764666f9-p8c2j               kube-system
	03c0e211978ec       docker.io/kindest/kindnetd@sha256:7c22558dc06a570d46ea6e8a73b23cdc754eb81f7c08d3441a3171ad359ffc27   25 seconds ago      Running             kindnet-cni               0                   6d4f5c02f660a       kindnet-8zgsx                          kube-system
	39df3868e30b9       32652ff1bbe6b6df16a3dc621fcad4e6e185a2852bffc5847aab79e36c54bfb8                                     27 seconds ago      Running             kube-proxy                0                   90ad97c6222ef       kube-proxy-99tf8                       kube-system
	4cbe4980b4381       550794e3b12ac21ec7fd940bdfb45f7c8dae3c52acd8c70e580b882de20c3dcc                                     37 seconds ago      Running             kube-scheduler            0                   965d6007f2903       kube-scheduler-pause-979859            kube-system
	87587bcd281f1       5c6acd67e9cd10eb60a246cd233db251ef62ea97e6572f897e873f0cb648f499                                     37 seconds ago      Running             kube-apiserver            0                   7f1b33272110f       kube-apiserver-pause-979859            kube-system
	697756342f1b7       0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2                                     37 seconds ago      Running             etcd                      0                   ad023adbc68b5       etcd-pause-979859                      kube-system
	8cefc332f1bb9       2c9a4b058bd7e6ec479c38f9e8a1dac2f5ee5b0a3ebda6dfac968e8720229508                                     37 seconds ago      Running             kube-controller-manager   0                   2572503136cee       kube-controller-manager-pause-979859   kube-system
	
	
	==> coredns [fb3d48a5461e3d903bd4f6a53010691c4575384f0fe1f9858577740063664cfc] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:57441 - 17508 "HINFO IN 3055222049047057707.2413179664257208072. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.420065937s
	
	
	==> describe nodes <==
	Name:               pause-979859
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-979859
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4473aa4ffaa416872fe849e19c0ce3dabca02c04
	                    minikube.k8s.io/name=pause-979859
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2026_01_11T07_48_40_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 11 Jan 2026 07:48:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-979859
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 11 Jan 2026 07:49:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 11 Jan 2026 07:48:57 +0000   Sun, 11 Jan 2026 07:48:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 11 Jan 2026 07:48:57 +0000   Sun, 11 Jan 2026 07:48:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 11 Jan 2026 07:48:57 +0000   Sun, 11 Jan 2026 07:48:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 11 Jan 2026 07:48:57 +0000   Sun, 11 Jan 2026 07:48:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    pause-979859
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 19a608e996fda0d6a8d0aefd69620b69
	  System UUID:                1c09a476-6e50-457b-bc42-6b9d3757c9af
	  Boot ID:                    bf5ef4fd-1e5e-4242-b522-48b308ed11f1
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.35.0
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7d764666f9-p8c2j                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     27s
	  kube-system                 etcd-pause-979859                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         33s
	  kube-system                 kindnet-8zgsx                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      28s
	  kube-system                 kube-apiserver-pause-979859             250m (3%)     0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-controller-manager-pause-979859    200m (2%)     0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-proxy-99tf8                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-scheduler-pause-979859             100m (1%)     0 (0%)      0 (0%)           0 (0%)         33s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  29s   node-controller  Node pause-979859 event: Registered Node pause-979859 in Controller
	
	
	==> dmesg <==
	[Jan11 07:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001873] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.085009] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.377362] i8042: Warning: Keylock active
	[  +0.013119] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.487024] block sda: the capability attribute has been deprecated.
	[  +0.081163] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.022550] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +4.654171] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [697756342f1b7fa4f1be176804f26aeac8f5640dc6a8e9956e55c932c1ec1e0c] <==
	{"level":"info","ts":"2026-01-11T07:48:35.871069Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2026-01-11T07:48:35.871087Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"ea7e25599daad906 became candidate at term 2"}
	{"level":"info","ts":"2026-01-11T07:48:35.871784Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2026-01-11T07:48:35.871822Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2026-01-11T07:48:35.871841Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"ea7e25599daad906 became leader at term 2"}
	{"level":"info","ts":"2026-01-11T07:48:35.871850Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2026-01-11T07:48:35.872473Z","caller":"etcdserver/server.go:2420","msg":"setting up initial cluster version using v3 API","cluster-version":"3.6"}
	{"level":"info","ts":"2026-01-11T07:48:35.872988Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-11T07:48:35.872983Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:pause-979859 ClientURLs:[https://192.168.76.2:2379]}","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2026-01-11T07:48:35.873028Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-11T07:48:35.873224Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2026-01-11T07:48:35.873244Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2026-01-11T07:48:35.873415Z","caller":"membership/cluster.go:682","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.6"}
	{"level":"info","ts":"2026-01-11T07:48:35.873511Z","caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.6"}
	{"level":"info","ts":"2026-01-11T07:48:35.873538Z","caller":"etcdserver/server.go:2440","msg":"cluster version is updated","cluster-version":"3.6"}
	{"level":"info","ts":"2026-01-11T07:48:35.873572Z","caller":"version/monitor.go:116","msg":"cluster version differs from storage version.","cluster-version":"3.6.0","storage-version":"3.5.0"}
	{"level":"info","ts":"2026-01-11T07:48:35.873656Z","caller":"schema/migration.go:65","msg":"updated storage version","new-storage-version":"3.6.0"}
	{"level":"info","ts":"2026-01-11T07:48:35.874301Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-11T07:48:35.874295Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-11T07:48:35.878377Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2026-01-11T07:48:35.878592Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2026-01-11T07:48:49.748249Z","caller":"traceutil/trace.go:172","msg":"trace[1693887841] transaction","detail":"{read_only:false; response_revision:427; number_of_response:1; }","duration":"127.692339ms","start":"2026-01-11T07:48:49.620531Z","end":"2026-01-11T07:48:49.748223Z","steps":["trace[1693887841] 'process raft request'  (duration: 102.968724ms)","trace[1693887841] 'compare'  (duration: 24.605351ms)"],"step_count":2}
	{"level":"error","ts":"2026-01-11T07:49:07.801988Z","caller":"etcdhttp/health.go:345","msg":"Health check error","path":"/readyz","reason":"[+]data_corruption ok\n[+]serializable_read ok\n[-]linearizable_read failed: context canceled\n[+]non_learner ok\n","status-code":503,"stacktrace":"go.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp.(*CheckRegistry).installRootHTTPEndpoint.newHealthHandler.func2\n\tgo.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp/health.go:345\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2294\nnet/http.(*ServeMux).ServeHTTP\n\tnet/http/server.go:2822\nnet/http.serverHandler.ServeHTTP\n\tnet/http/server.go:3301\nnet/http.(*conn).serve\n\tnet/http/server.go:2102"}
	{"level":"info","ts":"2026-01-11T07:49:07.918708Z","caller":"traceutil/trace.go:172","msg":"trace[907106383] linearizableReadLoop","detail":"{readStateIndex:474; appliedIndex:474; }","duration":"148.441975ms","start":"2026-01-11T07:49:07.770242Z","end":"2026-01-11T07:49:07.918684Z","steps":["trace[907106383] 'read index received'  (duration: 148.434172ms)","trace[907106383] 'applied index is now lower than readState.Index'  (duration: 6.746µs)"],"step_count":2}
	{"level":"info","ts":"2026-01-11T07:49:07.918871Z","caller":"traceutil/trace.go:172","msg":"trace[1713011793] transaction","detail":"{read_only:false; response_revision:457; number_of_response:1; }","duration":"181.193079ms","start":"2026-01-11T07:49:07.737663Z","end":"2026-01-11T07:49:07.918856Z","steps":["trace[1713011793] 'process raft request'  (duration: 181.049784ms)"],"step_count":1}
	
	
	==> kernel <==
	 07:49:12 up 31 min,  0 user,  load average: 7.96, 3.19, 1.84
	Linux pause-979859 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [03c0e211978ecf5f182769c51bc9eacba756e284410fadbd73e155decf990fec] <==
	I0111 07:48:47.412832       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0111 07:48:47.413281       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I0111 07:48:47.413434       1 main.go:148] setting mtu 1500 for CNI 
	I0111 07:48:47.413455       1 main.go:178] kindnetd IP family: "ipv4"
	I0111 07:48:47.413466       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2026-01-11T07:48:47Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0111 07:48:47.613548       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0111 07:48:47.613637       1 controller.go:381] "Waiting for informer caches to sync"
	I0111 07:48:47.613653       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0111 07:48:47.613781       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I0111 07:48:47.913714       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0111 07:48:47.913739       1 metrics.go:72] Registering metrics
	I0111 07:48:47.913855       1 controller.go:711] "Syncing nftables rules"
	I0111 07:48:57.615922       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0111 07:48:57.616012       1 main.go:301] handling current node
	I0111 07:49:07.615054       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0111 07:49:07.615099       1 main.go:301] handling current node
	
	
	==> kube-apiserver [87587bcd281f1c0a2aef860698f2ac71a9f66c8ad72bd9d536d718035eb38bbc] <==
	I0111 07:48:37.040160       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I0111 07:48:37.040211       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0111 07:48:37.040234       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0111 07:48:37.040546       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I0111 07:48:37.047755       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I0111 07:48:37.048638       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0111 07:48:37.049396       1 default_servicecidr_controller.go:231] Setting default ServiceCIDR condition Ready to True
	I0111 07:48:37.059972       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0111 07:48:37.938231       1 storage_scheduling.go:123] created PriorityClass system-node-critical with value 2000001000
	I0111 07:48:37.941682       1 storage_scheduling.go:123] created PriorityClass system-cluster-critical with value 2000000000
	I0111 07:48:37.941698       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I0111 07:48:38.386372       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0111 07:48:38.424257       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0111 07:48:38.552114       1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0111 07:48:38.559110       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I0111 07:48:38.560705       1 controller.go:667] quota admission added evaluator for: endpoints
	I0111 07:48:38.566514       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0111 07:48:38.985009       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I0111 07:48:39.639758       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I0111 07:48:39.651148       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0111 07:48:39.657619       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I0111 07:48:44.641665       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0111 07:48:44.795626       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0111 07:48:44.809584       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0111 07:48:44.989540       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [8cefc332f1bb940a084c44f74130ed4c2b329c7fb9503f73c7ec6a9e051e9362] <==
	I0111 07:48:43.798269       1 shared_informer.go:377] "Caches are synced"
	I0111 07:48:43.797173       1 shared_informer.go:377] "Caches are synced"
	I0111 07:48:43.798284       1 shared_informer.go:377] "Caches are synced"
	I0111 07:48:43.798260       1 shared_informer.go:377] "Caches are synced"
	I0111 07:48:43.797148       1 shared_informer.go:377] "Caches are synced"
	I0111 07:48:43.797336       1 shared_informer.go:377] "Caches are synced"
	I0111 07:48:43.798000       1 shared_informer.go:377] "Caches are synced"
	I0111 07:48:43.797227       1 shared_informer.go:377] "Caches are synced"
	I0111 07:48:43.797352       1 shared_informer.go:377] "Caches are synced"
	I0111 07:48:43.798496       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I0111 07:48:43.797226       1 shared_informer.go:377] "Caches are synced"
	I0111 07:48:43.800937       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="pause-979859"
	I0111 07:48:43.801026       1 node_lifecycle_controller.go:1038] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I0111 07:48:43.801076       1 shared_informer.go:377] "Caches are synced"
	I0111 07:48:43.801705       1 shared_informer.go:377] "Caches are synced"
	I0111 07:48:43.802339       1 shared_informer.go:377] "Caches are synced"
	I0111 07:48:43.797197       1 shared_informer.go:377] "Caches are synced"
	I0111 07:48:43.807793       1 shared_informer.go:370] "Waiting for caches to sync"
	I0111 07:48:43.810851       1 shared_informer.go:377] "Caches are synced"
	I0111 07:48:43.813540       1 range_allocator.go:433] "Set node PodCIDR" node="pause-979859" podCIDRs=["10.244.0.0/24"]
	I0111 07:48:43.897161       1 shared_informer.go:377] "Caches are synced"
	I0111 07:48:43.897235       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I0111 07:48:43.897245       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I0111 07:48:43.908649       1 shared_informer.go:377] "Caches are synced"
	I0111 07:48:58.804039       1 node_lifecycle_controller.go:1057] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-proxy [39df3868e30b956ed7b402910c52eb3d3f645eff78410c5ced0e05e07cfb2605] <==
	I0111 07:48:45.457104       1 server_linux.go:53] "Using iptables proxy"
	I0111 07:48:45.523348       1 shared_informer.go:370] "Waiting for caches to sync"
	I0111 07:48:45.624087       1 shared_informer.go:377] "Caches are synced"
	I0111 07:48:45.624139       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E0111 07:48:45.624230       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0111 07:48:45.664288       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0111 07:48:45.664357       1 server_linux.go:136] "Using iptables Proxier"
	I0111 07:48:45.673066       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0111 07:48:45.675052       1 server.go:529] "Version info" version="v1.35.0"
	I0111 07:48:45.675084       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0111 07:48:45.677400       1 config.go:200] "Starting service config controller"
	I0111 07:48:45.677473       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0111 07:48:45.677518       1 config.go:106] "Starting endpoint slice config controller"
	I0111 07:48:45.677561       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0111 07:48:45.677593       1 config.go:403] "Starting serviceCIDR config controller"
	I0111 07:48:45.677616       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0111 07:48:45.677684       1 config.go:309] "Starting node config controller"
	I0111 07:48:45.677709       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0111 07:48:45.677750       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0111 07:48:45.778630       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0111 07:48:45.778749       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0111 07:48:45.778766       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [4cbe4980b4381d3cedbd6d8e5865265d8171d37f8ceeeece80e97c768b8baca8] <==
	E0111 07:48:37.011945       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E0111 07:48:37.015591       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E0111 07:48:37.015734       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E0111 07:48:37.019868       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E0111 07:48:37.020009       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E0111 07:48:37.020083       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E0111 07:48:37.020167       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E0111 07:48:37.033019       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E0111 07:48:37.033150       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E0111 07:48:37.033228       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E0111 07:48:37.033266       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E0111 07:48:37.033314       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E0111 07:48:37.033373       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E0111 07:48:37.033420       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E0111 07:48:37.033477       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E0111 07:48:37.033571       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E0111 07:48:37.033634       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E0111 07:48:37.036866       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E0111 07:48:37.869718       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1693" type="*v1.ConfigMap"
	E0111 07:48:38.015039       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E0111 07:48:38.081775       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E0111 07:48:38.117285       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E0111 07:48:38.155456       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E0111 07:48:38.182559       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	I0111 07:48:40.706210       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Jan 11 07:48:45 pause-979859 kubelet[1295]: I0111 07:48:45.032520    1295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/92894192-3035-436e-bad2-126e9c157984-cni-cfg\") pod \"kindnet-8zgsx\" (UID: \"92894192-3035-436e-bad2-126e9c157984\") " pod="kube-system/kindnet-8zgsx"
	Jan 11 07:48:45 pause-979859 kubelet[1295]: I0111 07:48:45.032563    1295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/92494e6a-e314-4b9c-93d4-5a15fe8e759f-lib-modules\") pod \"kube-proxy-99tf8\" (UID: \"92494e6a-e314-4b9c-93d4-5a15fe8e759f\") " pod="kube-system/kube-proxy-99tf8"
	Jan 11 07:48:45 pause-979859 kubelet[1295]: I0111 07:48:45.032604    1295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d77cc\" (UniqueName: \"kubernetes.io/projected/92894192-3035-436e-bad2-126e9c157984-kube-api-access-d77cc\") pod \"kindnet-8zgsx\" (UID: \"92894192-3035-436e-bad2-126e9c157984\") " pod="kube-system/kindnet-8zgsx"
	Jan 11 07:48:45 pause-979859 kubelet[1295]: I0111 07:48:45.032633    1295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/92494e6a-e314-4b9c-93d4-5a15fe8e759f-kube-proxy\") pod \"kube-proxy-99tf8\" (UID: \"92494e6a-e314-4b9c-93d4-5a15fe8e759f\") " pod="kube-system/kube-proxy-99tf8"
	Jan 11 07:48:45 pause-979859 kubelet[1295]: I0111 07:48:45.032658    1295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbkcm\" (UniqueName: \"kubernetes.io/projected/92494e6a-e314-4b9c-93d4-5a15fe8e759f-kube-api-access-xbkcm\") pod \"kube-proxy-99tf8\" (UID: \"92494e6a-e314-4b9c-93d4-5a15fe8e759f\") " pod="kube-system/kube-proxy-99tf8"
	Jan 11 07:48:45 pause-979859 kubelet[1295]: I0111 07:48:45.576338    1295 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-99tf8" podStartSLOduration=0.576316915 podStartE2EDuration="576.316915ms" podCreationTimestamp="2026-01-11 07:48:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-11 07:48:45.57616911 +0000 UTC m=+6.161597572" watchObservedRunningTime="2026-01-11 07:48:45.576316915 +0000 UTC m=+6.161745377"
	Jan 11 07:48:47 pause-979859 kubelet[1295]: I0111 07:48:47.581610    1295 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kindnet-8zgsx" podStartSLOduration=1.8986988089999999 podStartE2EDuration="3.581592626s" podCreationTimestamp="2026-01-11 07:48:44 +0000 UTC" firstStartedPulling="2026-01-11 07:48:45.33435978 +0000 UTC m=+5.919788242" lastFinishedPulling="2026-01-11 07:48:47.017253603 +0000 UTC m=+7.602682059" observedRunningTime="2026-01-11 07:48:47.581472025 +0000 UTC m=+8.166900487" watchObservedRunningTime="2026-01-11 07:48:47.581592626 +0000 UTC m=+8.167021100"
	Jan 11 07:48:48 pause-979859 kubelet[1295]: E0111 07:48:48.422271    1295 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-pause-979859" containerName="kube-apiserver"
	Jan 11 07:48:48 pause-979859 kubelet[1295]: E0111 07:48:48.976504    1295 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-pause-979859" containerName="kube-controller-manager"
	Jan 11 07:48:49 pause-979859 kubelet[1295]: E0111 07:48:49.604746    1295 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-pause-979859" containerName="etcd"
	Jan 11 07:48:53 pause-979859 kubelet[1295]: E0111 07:48:53.200453    1295 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-pause-979859" containerName="kube-scheduler"
	Jan 11 07:48:57 pause-979859 kubelet[1295]: I0111 07:48:57.790513    1295 kubelet_node_status.go:427] "Fast updating node status as it just became ready"
	Jan 11 07:48:57 pause-979859 kubelet[1295]: I0111 07:48:57.920990    1295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aa1e3126-8605-4b65-8bd5-601f8a9e4971-config-volume\") pod \"coredns-7d764666f9-p8c2j\" (UID: \"aa1e3126-8605-4b65-8bd5-601f8a9e4971\") " pod="kube-system/coredns-7d764666f9-p8c2j"
	Jan 11 07:48:57 pause-979859 kubelet[1295]: I0111 07:48:57.921038    1295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47ljz\" (UniqueName: \"kubernetes.io/projected/aa1e3126-8605-4b65-8bd5-601f8a9e4971-kube-api-access-47ljz\") pod \"coredns-7d764666f9-p8c2j\" (UID: \"aa1e3126-8605-4b65-8bd5-601f8a9e4971\") " pod="kube-system/coredns-7d764666f9-p8c2j"
	Jan 11 07:48:58 pause-979859 kubelet[1295]: E0111 07:48:58.428402    1295 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-pause-979859" containerName="kube-apiserver"
	Jan 11 07:48:58 pause-979859 kubelet[1295]: E0111 07:48:58.594854    1295 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-p8c2j" containerName="coredns"
	Jan 11 07:48:58 pause-979859 kubelet[1295]: I0111 07:48:58.623769    1295 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-p8c2j" podStartSLOduration=13.623747228 podStartE2EDuration="13.623747228s" podCreationTimestamp="2026-01-11 07:48:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-11 07:48:58.610263587 +0000 UTC m=+19.195692049" watchObservedRunningTime="2026-01-11 07:48:58.623747228 +0000 UTC m=+19.209175690"
	Jan 11 07:48:58 pause-979859 kubelet[1295]: E0111 07:48:58.982744    1295 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-pause-979859" containerName="kube-controller-manager"
	Jan 11 07:48:59 pause-979859 kubelet[1295]: E0111 07:48:59.596447    1295 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-p8c2j" containerName="coredns"
	Jan 11 07:48:59 pause-979859 kubelet[1295]: E0111 07:48:59.606312    1295 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-pause-979859" containerName="etcd"
	Jan 11 07:49:00 pause-979859 kubelet[1295]: E0111 07:49:00.598316    1295 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-p8c2j" containerName="coredns"
	Jan 11 07:49:07 pause-979859 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Jan 11 07:49:07 pause-979859 systemd[1]: kubelet.service: Deactivated successfully.
	Jan 11 07:49:07 pause-979859 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Jan 11 07:49:07 pause-979859 systemd[1]: kubelet.service: Consumed 1.277s CPU time.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-979859 -n pause-979859
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-979859 -n pause-979859: exit status 2 (349.318864ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context pause-979859 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/Pause (6.11s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.51s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-086314 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-086314 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (270.607835ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-11T07:56:48Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-086314 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
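The MK_ADDON_ENABLE_PAUSED exit above comes from minikube's paused-state check, which runs `sudo runc list -f json` on the node and fails here because /run/runc does not exist. A minimal sketch of reproducing that check by hand against the same profile (the ssh invocation and the crictl fallback are illustrative assumptions, not commands taken from this test run):

  # Re-run the listing that failed; on this node it should reproduce the /run/runc error.
  out/minikube-linux-amd64 ssh -p old-k8s-version-086314 -- sudo runc list -f json
  # Ask the container runtime through crictl instead, filtered to kube-system containers.
  out/minikube-linux-amd64 ssh -p old-k8s-version-086314 -- sudo crictl ps -a --label io.kubernetes.pod.namespace=kube-system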
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-086314 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context old-k8s-version-086314 describe deploy/metrics-server -n kube-system: exit status 1 (72.907319ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-086314 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
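The expected string at start_stop_delete_test.go:219 is the --registries override prepended to the --images override from the enable command (fake.domain + registry.k8s.io/echoserver:1.4). One way to see which image the addon deployment actually carries, once it exists, is a jsonpath query; this is an illustrative check, and on this cluster it returns the same NotFound as the describe above because the deployment was never created:

  kubectl --context old-k8s-version-086314 -n kube-system get deploy metrics-server \
    -o jsonpath='{.spec.template.spec.containers[*].image}'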
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect old-k8s-version-086314
helpers_test.go:244: (dbg) docker inspect old-k8s-version-086314:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "3dea01a8b9cd82bb7717992a3417860bfe405db14a402ffeb04605a3dc1a3a6e",
	        "Created": "2026-01-11T07:55:55.943439087Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 285886,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2026-01-11T07:55:55.984431528Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:9d81ad6f098dcf669510a21e6f98da4d3301079c139c22f3d7bcee050830f45e",
	        "ResolvConfPath": "/var/lib/docker/containers/3dea01a8b9cd82bb7717992a3417860bfe405db14a402ffeb04605a3dc1a3a6e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3dea01a8b9cd82bb7717992a3417860bfe405db14a402ffeb04605a3dc1a3a6e/hostname",
	        "HostsPath": "/var/lib/docker/containers/3dea01a8b9cd82bb7717992a3417860bfe405db14a402ffeb04605a3dc1a3a6e/hosts",
	        "LogPath": "/var/lib/docker/containers/3dea01a8b9cd82bb7717992a3417860bfe405db14a402ffeb04605a3dc1a3a6e/3dea01a8b9cd82bb7717992a3417860bfe405db14a402ffeb04605a3dc1a3a6e-json.log",
	        "Name": "/old-k8s-version-086314",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-086314:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-086314",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "3dea01a8b9cd82bb7717992a3417860bfe405db14a402ffeb04605a3dc1a3a6e",
	                "LowerDir": "/var/lib/docker/overlay2/98f2997c446d54f761fca052ac1e48a6fb561a1c57bfe758c99a4f45d76f0a83-init/diff:/var/lib/docker/overlay2/fbed3e2386e680328722e9ed68250a4aae1dd3b50a64848ee2ebb685fb1cc002/diff",
	                "MergedDir": "/var/lib/docker/overlay2/98f2997c446d54f761fca052ac1e48a6fb561a1c57bfe758c99a4f45d76f0a83/merged",
	                "UpperDir": "/var/lib/docker/overlay2/98f2997c446d54f761fca052ac1e48a6fb561a1c57bfe758c99a4f45d76f0a83/diff",
	                "WorkDir": "/var/lib/docker/overlay2/98f2997c446d54f761fca052ac1e48a6fb561a1c57bfe758c99a4f45d76f0a83/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-086314",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-086314/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-086314",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-086314",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-086314",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "b4c0367de207bdd00c91f238608d852d855615e659ff267648e08705fbf2df5a",
	            "SandboxKey": "/var/run/docker/netns/b4c0367de207",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33088"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33089"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33092"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33090"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33091"
	                    }
	                ]
	            },
	            "Networks": {
	                "old-k8s-version-086314": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "9bb753c53c10f85ef7bfc0f02abe3d37237f5f78e229ae0d2842e242dc126077",
	                    "EndpointID": "a3bd762266d0abf4f4166d42924db850832c760b90312c269304146cf24619ee",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "06:2b:6d:bb:77:54",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-086314",
	                        "3dea01a8b9cd"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-086314 -n old-k8s-version-086314
helpers_test.go:253: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-086314 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-086314 logs -n 25: (1.164520671s)
helpers_test.go:261: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                          ARGS                                                                          │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p embed-certs-285966 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ embed-certs-285966     │ jenkins │ v1.37.0 │ 11 Jan 26 07:56 UTC │                     │
	│ ssh     │ -p bridge-181803 pgrep -a kubelet                                                                                                                      │ bridge-181803          │ jenkins │ v1.37.0 │ 11 Jan 26 07:56 UTC │ 11 Jan 26 07:56 UTC │
	│ ssh     │ -p bridge-181803 sudo cat /etc/nsswitch.conf                                                                                                           │ bridge-181803          │ jenkins │ v1.37.0 │ 11 Jan 26 07:56 UTC │ 11 Jan 26 07:56 UTC │
	│ ssh     │ -p bridge-181803 sudo cat /etc/hosts                                                                                                                   │ bridge-181803          │ jenkins │ v1.37.0 │ 11 Jan 26 07:56 UTC │ 11 Jan 26 07:56 UTC │
	│ ssh     │ -p bridge-181803 sudo cat /etc/resolv.conf                                                                                                             │ bridge-181803          │ jenkins │ v1.37.0 │ 11 Jan 26 07:56 UTC │ 11 Jan 26 07:56 UTC │
	│ ssh     │ -p bridge-181803 sudo crictl pods                                                                                                                      │ bridge-181803          │ jenkins │ v1.37.0 │ 11 Jan 26 07:56 UTC │ 11 Jan 26 07:56 UTC │
	│ ssh     │ -p bridge-181803 sudo crictl ps --all                                                                                                                  │ bridge-181803          │ jenkins │ v1.37.0 │ 11 Jan 26 07:56 UTC │ 11 Jan 26 07:56 UTC │
	│ ssh     │ -p bridge-181803 sudo find /etc/cni -type f -exec sh -c 'echo {}; cat {}' \;                                                                           │ bridge-181803          │ jenkins │ v1.37.0 │ 11 Jan 26 07:56 UTC │ 11 Jan 26 07:56 UTC │
	│ ssh     │ -p bridge-181803 sudo ip a s                                                                                                                           │ bridge-181803          │ jenkins │ v1.37.0 │ 11 Jan 26 07:56 UTC │ 11 Jan 26 07:56 UTC │
	│ ssh     │ -p bridge-181803 sudo ip r s                                                                                                                           │ bridge-181803          │ jenkins │ v1.37.0 │ 11 Jan 26 07:56 UTC │ 11 Jan 26 07:56 UTC │
	│ ssh     │ -p bridge-181803 sudo iptables-save                                                                                                                    │ bridge-181803          │ jenkins │ v1.37.0 │ 11 Jan 26 07:56 UTC │ 11 Jan 26 07:56 UTC │
	│ ssh     │ -p bridge-181803 sudo iptables -t nat -L -n -v                                                                                                         │ bridge-181803          │ jenkins │ v1.37.0 │ 11 Jan 26 07:56 UTC │ 11 Jan 26 07:56 UTC │
	│ ssh     │ -p bridge-181803 sudo systemctl status kubelet --all --full --no-pager                                                                                 │ bridge-181803          │ jenkins │ v1.37.0 │ 11 Jan 26 07:56 UTC │ 11 Jan 26 07:56 UTC │
	│ ssh     │ -p bridge-181803 sudo systemctl cat kubelet --no-pager                                                                                                 │ bridge-181803          │ jenkins │ v1.37.0 │ 11 Jan 26 07:56 UTC │ 11 Jan 26 07:56 UTC │
	│ ssh     │ -p bridge-181803 sudo journalctl -xeu kubelet --all --full --no-pager                                                                                  │ bridge-181803          │ jenkins │ v1.37.0 │ 11 Jan 26 07:56 UTC │ 11 Jan 26 07:56 UTC │
	│ ssh     │ -p bridge-181803 sudo cat /etc/kubernetes/kubelet.conf                                                                                                 │ bridge-181803          │ jenkins │ v1.37.0 │ 11 Jan 26 07:56 UTC │ 11 Jan 26 07:56 UTC │
	│ ssh     │ -p bridge-181803 sudo cat /var/lib/kubelet/config.yaml                                                                                                 │ bridge-181803          │ jenkins │ v1.37.0 │ 11 Jan 26 07:56 UTC │ 11 Jan 26 07:56 UTC │
	│ ssh     │ -p bridge-181803 sudo systemctl status docker --all --full --no-pager                                                                                  │ bridge-181803          │ jenkins │ v1.37.0 │ 11 Jan 26 07:56 UTC │                     │
	│ ssh     │ -p bridge-181803 sudo systemctl cat docker --no-pager                                                                                                  │ bridge-181803          │ jenkins │ v1.37.0 │ 11 Jan 26 07:56 UTC │ 11 Jan 26 07:56 UTC │
	│ ssh     │ -p bridge-181803 sudo cat /etc/docker/daemon.json                                                                                                      │ bridge-181803          │ jenkins │ v1.37.0 │ 11 Jan 26 07:56 UTC │                     │
	│ ssh     │ -p bridge-181803 sudo docker system info                                                                                                               │ bridge-181803          │ jenkins │ v1.37.0 │ 11 Jan 26 07:56 UTC │                     │
	│ ssh     │ -p bridge-181803 sudo systemctl status cri-docker --all --full --no-pager                                                                              │ bridge-181803          │ jenkins │ v1.37.0 │ 11 Jan 26 07:56 UTC │                     │
	│ addons  │ enable metrics-server -p old-k8s-version-086314 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain           │ old-k8s-version-086314 │ jenkins │ v1.37.0 │ 11 Jan 26 07:56 UTC │                     │
	│ ssh     │ -p bridge-181803 sudo systemctl cat cri-docker --no-pager                                                                                              │ bridge-181803          │ jenkins │ v1.37.0 │ 11 Jan 26 07:56 UTC │ 11 Jan 26 07:56 UTC │
	│ ssh     │ -p bridge-181803 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                         │ bridge-181803          │ jenkins │ v1.37.0 │ 11 Jan 26 07:56 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2026/01/11 07:56:12
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0111 07:56:12.642993  295236 out.go:360] Setting OutFile to fd 1 ...
	I0111 07:56:12.643397  295236 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 07:56:12.643410  295236 out.go:374] Setting ErrFile to fd 2...
	I0111 07:56:12.643417  295236 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 07:56:12.643747  295236 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22402-4689/.minikube/bin
	I0111 07:56:12.644428  295236 out.go:368] Setting JSON to false
	I0111 07:56:12.646115  295236 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2321,"bootTime":1768115852,"procs":318,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0111 07:56:12.646222  295236 start.go:143] virtualization: kvm guest
	I0111 07:56:12.649772  295236 out.go:179] * [embed-certs-285966] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0111 07:56:12.657049  295236 notify.go:221] Checking for updates...
	I0111 07:56:12.657038  295236 out.go:179]   - MINIKUBE_LOCATION=22402
	I0111 07:56:12.658592  295236 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0111 07:56:12.660044  295236 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22402-4689/kubeconfig
	I0111 07:56:12.661246  295236 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22402-4689/.minikube
	I0111 07:56:12.662647  295236 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0111 07:56:12.663961  295236 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0111 07:56:12.666007  295236 config.go:182] Loaded profile config "bridge-181803": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 07:56:12.666226  295236 config.go:182] Loaded profile config "no-preload-875594": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 07:56:12.666877  295236 config.go:182] Loaded profile config "old-k8s-version-086314": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I0111 07:56:12.667046  295236 driver.go:422] Setting default libvirt URI to qemu:///system
	I0111 07:56:12.701025  295236 docker.go:124] docker version: linux-29.1.4:Docker Engine - Community
	I0111 07:56:12.701142  295236 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0111 07:56:12.775490  295236 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:73 OomKillDisable:false NGoroutines:84 SystemTime:2026-01-11 07:56:12.763455329 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0111 07:56:12.775588  295236 docker.go:319] overlay module found
	I0111 07:56:12.778484  295236 out.go:179] * Using the docker driver based on user configuration
	I0111 07:56:12.779829  295236 start.go:309] selected driver: docker
	I0111 07:56:12.779851  295236 start.go:928] validating driver "docker" against <nil>
	I0111 07:56:12.779888  295236 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0111 07:56:12.780487  295236 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0111 07:56:12.844073  295236 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:73 OomKillDisable:false NGoroutines:84 SystemTime:2026-01-11 07:56:12.832602774 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0111 07:56:12.844244  295236 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I0111 07:56:12.844489  295236 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0111 07:56:12.846132  295236 out.go:179] * Using Docker driver with root privileges
	I0111 07:56:12.847296  295236 cni.go:84] Creating CNI manager for ""
	I0111 07:56:12.847380  295236 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0111 07:56:12.847399  295236 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I0111 07:56:12.847478  295236 start.go:353] cluster config:
	{Name:embed-certs-285966 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:embed-certs-285966 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPI
D:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0111 07:56:12.848973  295236 out.go:179] * Starting "embed-certs-285966" primary control-plane node in "embed-certs-285966" cluster
	I0111 07:56:12.850202  295236 cache.go:134] Beginning downloading kic base image for docker with crio
	I0111 07:56:12.851469  295236 out.go:179] * Pulling base image v0.0.48-1768032998-22402 ...
	I0111 07:56:12.852707  295236 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0111 07:56:12.852748  295236 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22402-4689/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4
	I0111 07:56:12.852766  295236 cache.go:65] Caching tarball of preloaded images
	I0111 07:56:12.852828  295236 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 in local docker daemon
	I0111 07:56:12.852900  295236 preload.go:251] Found /home/jenkins/minikube-integration/22402-4689/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0111 07:56:12.852920  295236 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I0111 07:56:12.853080  295236 profile.go:143] Saving config to /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/embed-certs-285966/config.json ...
	I0111 07:56:12.853114  295236 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/embed-certs-285966/config.json: {Name:mk6d5aafe62d7f971ef634a77f3cf8e538acc272 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 07:56:12.876119  295236 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 in local docker daemon, skipping pull
	I0111 07:56:12.876139  295236 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 exists in daemon, skipping load
	I0111 07:56:12.876160  295236 cache.go:243] Successfully downloaded all kic artifacts
	I0111 07:56:12.876188  295236 start.go:360] acquireMachinesLock for embed-certs-285966: {Name:mk50d8c197b424b03264cb0566254a8e652b1255 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0111 07:56:12.876266  295236 start.go:364] duration metric: took 63.209µs to acquireMachinesLock for "embed-certs-285966"
	I0111 07:56:12.876287  295236 start.go:93] Provisioning new machine with config: &{Name:embed-certs-285966 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:embed-certs-285966 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmw
arePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0111 07:56:12.876341  295236 start.go:125] createHost starting for "" (driver="docker")
	I0111 07:56:08.184150  287667 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.35.0: (1.409503557s)
	I0111 07:56:08.184183  287667 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22402-4689/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0 from cache
	I0111 07:56:08.184205  287667 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.13.1
	I0111 07:56:08.184210  287667 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.352037312s)
	I0111 07:56:08.184266  287667 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.13.1
	I0111 07:56:08.184358  287667 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0111 07:56:09.647686  287667 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.13.1: (1.46338853s)
	I0111 07:56:09.647724  287667 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22402-4689/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 from cache
	I0111 07:56:09.647754  287667 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.35.0
	I0111 07:56:09.647823  287667 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.35.0
	I0111 07:56:09.647754  287667 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.463354088s)
	I0111 07:56:09.647923  287667 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22402-4689/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0111 07:56:09.648073  287667 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0111 07:56:11.860359  287667 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.212251202s)
	I0111 07:56:11.860461  287667 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0111 07:56:11.860551  287667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-4689/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I0111 07:56:11.860553  287667 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.35.0: (2.212704636s)
	I0111 07:56:11.860576  287667 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22402-4689/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0 from cache
	I0111 07:56:11.860597  287667 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.6.6-0
	I0111 07:56:11.860641  287667 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.6.6-0
	I0111 07:56:10.816726  282961 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0111 07:56:10.825404  282961 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.0/kubectl ...
	I0111 07:56:10.825433  282961 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I0111 07:56:10.853539  282961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0111 07:56:11.985881  282961 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.132299367s)
	I0111 07:56:11.985932  282961 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0111 07:56:11.986112  282961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes old-k8s-version-086314 minikube.k8s.io/updated_at=2026_01_11T07_56_11_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=4473aa4ffaa416872fe849e19c0ce3dabca02c04 minikube.k8s.io/name=old-k8s-version-086314 minikube.k8s.io/primary=true
	I0111 07:56:11.986123  282961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0111 07:56:11.995835  282961 ops.go:34] apiserver oom_adj: -16
	I0111 07:56:12.084427  282961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0111 07:56:12.585002  282961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0111 07:56:13.084485  282961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0111 07:56:13.584694  282961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0111 07:56:14.085047  282961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0111 07:56:14.584658  282961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0111 07:56:11.945996  277748 addons.go:530] duration metric: took 1.172802282s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0111 07:56:11.951419  277748 api_server.go:325] https://192.168.94.2:8443/healthz returned 200:
	ok
	I0111 07:56:11.953330  277748 api_server.go:141] control plane version: v1.35.0
	I0111 07:56:11.953380  277748 api_server.go:131] duration metric: took 23.816858ms to wait for apiserver health ...
	I0111 07:56:11.953395  277748 system_pods.go:43] waiting for kube-system pods to appear ...
	I0111 07:56:11.959207  277748 system_pods.go:59] 8 kube-system pods found
	I0111 07:56:11.959261  277748 system_pods.go:61] "coredns-7d764666f9-87qxb" [df3da5ff-544a-4e80-b0e5-5c4f1fde5c70] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0111 07:56:11.959274  277748 system_pods.go:61] "coredns-7d764666f9-jgvnd" [2c57b976-b57c-4690-a0f1-0b97f73dd893] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0111 07:56:11.959284  277748 system_pods.go:61] "etcd-bridge-181803" [70c1edce-0414-406b-ae92-1a770ad3445e] Running
	I0111 07:56:11.959291  277748 system_pods.go:61] "kube-apiserver-bridge-181803" [886985fb-7ac6-420a-b371-efda71fcb30a] Running
	I0111 07:56:11.959304  277748 system_pods.go:61] "kube-controller-manager-bridge-181803" [a2ebe042-8ea0-4090-932b-1179f59e2248] Running
	I0111 07:56:11.959312  277748 system_pods.go:61] "kube-proxy-dx6f7" [39ec2e67-dab7-4236-b016-0a4a65245372] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0111 07:56:11.959328  277748 system_pods.go:61] "kube-scheduler-bridge-181803" [78ee7349-f6dd-4599-87d1-1eda14ea33c4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0111 07:56:11.959335  277748 system_pods.go:61] "storage-provisioner" [7cb23b25-94c8-4b95-b0ff-0dc54d2ce599] Pending
	I0111 07:56:11.959344  277748 system_pods.go:74] duration metric: took 5.936276ms to wait for pod list to return data ...
	I0111 07:56:11.959355  277748 default_sa.go:34] waiting for default service account to be created ...
	I0111 07:56:11.963237  277748 default_sa.go:45] found service account: "default"
	I0111 07:56:11.963259  277748 default_sa.go:55] duration metric: took 3.897172ms for default service account to be created ...
	I0111 07:56:11.963271  277748 system_pods.go:116] waiting for k8s-apps to be running ...
	I0111 07:56:11.967911  277748 system_pods.go:86] 8 kube-system pods found
	I0111 07:56:11.967955  277748 system_pods.go:89] "coredns-7d764666f9-87qxb" [df3da5ff-544a-4e80-b0e5-5c4f1fde5c70] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0111 07:56:11.967965  277748 system_pods.go:89] "coredns-7d764666f9-jgvnd" [2c57b976-b57c-4690-a0f1-0b97f73dd893] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0111 07:56:11.967972  277748 system_pods.go:89] "etcd-bridge-181803" [70c1edce-0414-406b-ae92-1a770ad3445e] Running
	I0111 07:56:11.967980  277748 system_pods.go:89] "kube-apiserver-bridge-181803" [886985fb-7ac6-420a-b371-efda71fcb30a] Running
	I0111 07:56:11.967986  277748 system_pods.go:89] "kube-controller-manager-bridge-181803" [a2ebe042-8ea0-4090-932b-1179f59e2248] Running
	I0111 07:56:11.967994  277748 system_pods.go:89] "kube-proxy-dx6f7" [39ec2e67-dab7-4236-b016-0a4a65245372] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0111 07:56:11.968032  277748 system_pods.go:89] "kube-scheduler-bridge-181803" [78ee7349-f6dd-4599-87d1-1eda14ea33c4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0111 07:56:11.968048  277748 system_pods.go:89] "storage-provisioner" [7cb23b25-94c8-4b95-b0ff-0dc54d2ce599] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0111 07:56:11.968081  277748 retry.go:84] will retry after 300ms: missing components: kube-dns, kube-proxy
	I0111 07:56:12.269851  277748 system_pods.go:86] 8 kube-system pods found
	I0111 07:56:12.269898  277748 system_pods.go:89] "coredns-7d764666f9-87qxb" [df3da5ff-544a-4e80-b0e5-5c4f1fde5c70] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0111 07:56:12.269909  277748 system_pods.go:89] "coredns-7d764666f9-jgvnd" [2c57b976-b57c-4690-a0f1-0b97f73dd893] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0111 07:56:12.269916  277748 system_pods.go:89] "etcd-bridge-181803" [70c1edce-0414-406b-ae92-1a770ad3445e] Running
	I0111 07:56:12.269922  277748 system_pods.go:89] "kube-apiserver-bridge-181803" [886985fb-7ac6-420a-b371-efda71fcb30a] Running
	I0111 07:56:12.269927  277748 system_pods.go:89] "kube-controller-manager-bridge-181803" [a2ebe042-8ea0-4090-932b-1179f59e2248] Running
	I0111 07:56:12.269947  277748 system_pods.go:89] "kube-proxy-dx6f7" [39ec2e67-dab7-4236-b016-0a4a65245372] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0111 07:56:12.269953  277748 system_pods.go:89] "kube-scheduler-bridge-181803" [78ee7349-f6dd-4599-87d1-1eda14ea33c4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0111 07:56:12.269960  277748 system_pods.go:89] "storage-provisioner" [7cb23b25-94c8-4b95-b0ff-0dc54d2ce599] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0111 07:56:12.573138  277748 system_pods.go:86] 8 kube-system pods found
	I0111 07:56:12.573180  277748 system_pods.go:89] "coredns-7d764666f9-87qxb" [df3da5ff-544a-4e80-b0e5-5c4f1fde5c70] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0111 07:56:12.573192  277748 system_pods.go:89] "coredns-7d764666f9-jgvnd" [2c57b976-b57c-4690-a0f1-0b97f73dd893] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0111 07:56:12.573199  277748 system_pods.go:89] "etcd-bridge-181803" [70c1edce-0414-406b-ae92-1a770ad3445e] Running
	I0111 07:56:12.573211  277748 system_pods.go:89] "kube-apiserver-bridge-181803" [886985fb-7ac6-420a-b371-efda71fcb30a] Running
	I0111 07:56:12.573217  277748 system_pods.go:89] "kube-controller-manager-bridge-181803" [a2ebe042-8ea0-4090-932b-1179f59e2248] Running
	I0111 07:56:12.573230  277748 system_pods.go:89] "kube-proxy-dx6f7" [39ec2e67-dab7-4236-b016-0a4a65245372] Running
	I0111 07:56:12.573237  277748 system_pods.go:89] "kube-scheduler-bridge-181803" [78ee7349-f6dd-4599-87d1-1eda14ea33c4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0111 07:56:12.573250  277748 system_pods.go:89] "storage-provisioner" [7cb23b25-94c8-4b95-b0ff-0dc54d2ce599] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0111 07:56:12.573267  277748 system_pods.go:126] duration metric: took 609.990302ms to wait for k8s-apps to be running ...
	I0111 07:56:12.573282  277748 system_svc.go:44] waiting for kubelet service to be running ....
	I0111 07:56:12.573331  277748 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0111 07:56:12.600219  277748 system_svc.go:56] duration metric: took 26.92741ms WaitForService to wait for kubelet
	I0111 07:56:12.600249  277748 kubeadm.go:587] duration metric: took 1.827089081s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0111 07:56:12.600271  277748 node_conditions.go:102] verifying NodePressure condition ...
	I0111 07:56:12.605270  277748 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0111 07:56:12.605345  277748 node_conditions.go:123] node cpu capacity is 8
	I0111 07:56:12.605380  277748 node_conditions.go:105] duration metric: took 5.10334ms to run NodePressure ...
	I0111 07:56:12.605423  277748 start.go:242] waiting for startup goroutines ...
	I0111 07:56:12.605449  277748 start.go:247] waiting for cluster config update ...
	I0111 07:56:12.605492  277748 start.go:256] writing updated cluster config ...
	I0111 07:56:12.605885  277748 ssh_runner.go:195] Run: rm -f paused
	I0111 07:56:12.611086  277748 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0111 07:56:12.616263  277748 pod_ready.go:83] waiting for pod "coredns-7d764666f9-87qxb" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 07:56:14.123586  277748 pod_ready.go:94] pod "coredns-7d764666f9-87qxb" is "Ready"
	I0111 07:56:14.123619  277748 pod_ready.go:86] duration metric: took 1.507262424s for pod "coredns-7d764666f9-87qxb" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 07:56:14.123632  277748 pod_ready.go:83] waiting for pod "coredns-7d764666f9-jgvnd" in "kube-system" namespace to be "Ready" or be gone ...
	W0111 07:56:16.129537  277748 pod_ready.go:104] pod "coredns-7d764666f9-jgvnd" is not "Ready", error: <nil>
	I0111 07:56:12.880268  295236 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0111 07:56:12.880490  295236 start.go:159] libmachine.API.Create for "embed-certs-285966" (driver="docker")
	I0111 07:56:12.880523  295236 client.go:173] LocalClient.Create starting
	I0111 07:56:12.880609  295236 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22402-4689/.minikube/certs/ca.pem
	I0111 07:56:12.880664  295236 main.go:144] libmachine: Decoding PEM data...
	I0111 07:56:12.880700  295236 main.go:144] libmachine: Parsing certificate...
	I0111 07:56:12.880758  295236 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22402-4689/.minikube/certs/cert.pem
	I0111 07:56:12.880795  295236 main.go:144] libmachine: Decoding PEM data...
	I0111 07:56:12.880828  295236 main.go:144] libmachine: Parsing certificate...
	I0111 07:56:12.881178  295236 cli_runner.go:164] Run: docker network inspect embed-certs-285966 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0111 07:56:12.899714  295236 cli_runner.go:211] docker network inspect embed-certs-285966 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0111 07:56:12.899823  295236 network_create.go:284] running [docker network inspect embed-certs-285966] to gather additional debugging logs...
	I0111 07:56:12.899870  295236 cli_runner.go:164] Run: docker network inspect embed-certs-285966
	W0111 07:56:12.918857  295236 cli_runner.go:211] docker network inspect embed-certs-285966 returned with exit code 1
	I0111 07:56:12.918889  295236 network_create.go:287] error running [docker network inspect embed-certs-285966]: docker network inspect embed-certs-285966: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-285966 not found
	I0111 07:56:12.918906  295236 network_create.go:289] output of [docker network inspect embed-certs-285966]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-285966 not found
	
	** /stderr **
	I0111 07:56:12.919024  295236 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0111 07:56:12.938599  295236 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-97182e010def IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:7c:25:92:4b:a6} reservation:<nil>}
	I0111 07:56:12.939407  295236 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-2c82d96e68c9 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:72:e5:3c:51:ff:2e} reservation:<nil>}
	I0111 07:56:12.940370  295236 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-a487bdecf87a IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:76:a6:1b:9d:11:3d} reservation:<nil>}
	I0111 07:56:12.941103  295236 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-e8428f34acce IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:ee:dd:0c:57:0e:02} reservation:<nil>}
	I0111 07:56:12.942042  295236 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f72b30}
	I0111 07:56:12.942067  295236 network_create.go:124] attempt to create docker network embed-certs-285966 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I0111 07:56:12.942105  295236 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-285966 embed-certs-285966
	I0111 07:56:12.996370  295236 network_create.go:108] docker network embed-certs-285966 192.168.85.0/24 created
	I0111 07:56:12.996425  295236 kic.go:121] calculated static IP "192.168.85.2" for the "embed-certs-285966" container
	I0111 07:56:12.996532  295236 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0111 07:56:13.016559  295236 cli_runner.go:164] Run: docker volume create embed-certs-285966 --label name.minikube.sigs.k8s.io=embed-certs-285966 --label created_by.minikube.sigs.k8s.io=true
	I0111 07:56:13.038076  295236 oci.go:103] Successfully created a docker volume embed-certs-285966
	I0111 07:56:13.038171  295236 cli_runner.go:164] Run: docker run --rm --name embed-certs-285966-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-285966 --entrypoint /usr/bin/test -v embed-certs-285966:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 -d /var/lib
	I0111 07:56:13.753224  295236 oci.go:107] Successfully prepared a docker volume embed-certs-285966
	I0111 07:56:13.753280  295236 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0111 07:56:13.753289  295236 kic.go:194] Starting extracting preloaded images to volume ...
	I0111 07:56:13.753346  295236 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22402-4689/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-285966:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 -I lz4 -xf /preloaded.tar -C /extractDir
	I0111 07:56:13.578084  287667 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.6.6-0: (1.717416755s)
	I0111 07:56:13.578121  287667 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22402-4689/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.6-0 from cache
	I0111 07:56:13.578146  287667 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.35.0
	I0111 07:56:13.578194  287667 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.35.0
	I0111 07:56:15.150673  287667 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.35.0: (1.572454544s)
	I0111 07:56:15.150710  287667 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22402-4689/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0 from cache
	I0111 07:56:15.150738  287667 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.35.0
	I0111 07:56:15.150794  287667 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.35.0
	I0111 07:56:17.932079  287667 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.35.0: (2.781235573s)
	I0111 07:56:17.932108  287667 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22402-4689/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0 from cache
	I0111 07:56:17.932138  287667 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0111 07:56:17.932212  287667 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0111 07:56:15.085367  282961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0111 07:56:15.584661  282961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0111 07:56:16.085183  282961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0111 07:56:16.584716  282961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0111 07:56:17.084901  282961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0111 07:56:17.585106  282961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0111 07:56:18.085505  282961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0111 07:56:18.584987  282961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0111 07:56:19.084603  282961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0111 07:56:19.584993  282961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0111 07:56:18.674676  287667 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22402-4689/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0111 07:56:18.674722  287667 cache_images.go:125] Successfully loaded all cached images
	I0111 07:56:18.674729  287667 cache_images.go:94] duration metric: took 13.130256899s to LoadCachedImages
	I0111 07:56:18.674744  287667 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0 crio true true} ...
	I0111 07:56:18.674938  287667 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-875594 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:no-preload-875594 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0111 07:56:18.675034  287667 ssh_runner.go:195] Run: crio config
	I0111 07:56:18.727824  287667 cni.go:84] Creating CNI manager for ""
	I0111 07:56:18.727848  287667 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0111 07:56:18.727862  287667 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I0111 07:56:18.727882  287667 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-875594 NodeName:no-preload-875594 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/
kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0111 07:56:18.728011  287667 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-875594"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0111 07:56:18.728071  287667 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I0111 07:56:18.737054  287667 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.35.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.35.0': No such file or directory
	
	Initiating transfer...
	I0111 07:56:18.737120  287667 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.35.0
	I0111 07:56:18.746275  287667 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0/bin/linux/amd64/kubectl.sha256
	I0111 07:56:18.746291  287667 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.35.0/bin/linux/amd64/kubelet.sha256
	I0111 07:56:18.746324  287667 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0/bin/linux/amd64/kubeadm.sha256
	I0111 07:56:18.746349  287667 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0111 07:56:18.746370  287667 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0/kubectl
	I0111 07:56:18.746397  287667 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0/kubeadm
	I0111 07:56:18.752175  287667 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0/kubeadm': No such file or directory
	I0111 07:56:18.752229  287667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-4689/.minikube/cache/linux/amd64/v1.35.0/kubeadm --> /var/lib/minikube/binaries/v1.35.0/kubeadm (72368312 bytes)
	I0111 07:56:18.752389  287667 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0/kubectl': No such file or directory
	I0111 07:56:18.752430  287667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-4689/.minikube/cache/linux/amd64/v1.35.0/kubectl --> /var/lib/minikube/binaries/v1.35.0/kubectl (58597560 bytes)
	I0111 07:56:18.766330  287667 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0/kubelet
	I0111 07:56:18.797399  287667 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0/kubelet': No such file or directory
	I0111 07:56:18.797438  287667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-4689/.minikube/cache/linux/amd64/v1.35.0/kubelet --> /var/lib/minikube/binaries/v1.35.0/kubelet (58110244 bytes)
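The three downloads above use dl.k8s.io's "?checksum=file:...sha256" form, so each binary is verified against its published digest before being copied into /var/lib/minikube/binaries/v1.35.0. A self-contained Go sketch of the same fetch-and-verify step, standard library only; the output file name and error handling are illustrative, not minikube's:

	// fetch_and_verify.go - sketch: download a release binary and check its
	// published sha256, as the checksum URLs in the log above imply.
	package main

	import (
		"crypto/sha256"
		"encoding/hex"
		"fmt"
		"io"
		"net/http"
		"os"
		"strings"
	)

	func fetch(url string) ([]byte, error) {
		resp, err := http.Get(url)
		if err != nil {
			return nil, err
		}
		defer resp.Body.Close()
		if resp.StatusCode != http.StatusOK {
			return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
		}
		return io.ReadAll(resp.Body)
	}

	func main() {
		// kubelet and kubeadm follow the same pattern with their own URLs.
		base := "https://dl.k8s.io/release/v1.35.0/bin/linux/amd64/kubectl"

		bin, err := fetch(base)
		if err != nil {
			panic(err)
		}
		sum, err := fetch(base + ".sha256")
		if err != nil {
			panic(err)
		}

		got := sha256.Sum256(bin)
		want := strings.Fields(string(sum))[0] // the .sha256 file holds the hex digest

		if hex.EncodeToString(got[:]) != want {
			panic("checksum mismatch")
		}
		// Write the verified binary locally; minikube then scp's it to
		// /var/lib/minikube/binaries/<version>/ on the node.
		if err := os.WriteFile("kubectl", bin, 0o755); err != nil {
			panic(err)
		}
	}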
	I0111 07:56:19.371397  287667 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0111 07:56:19.381524  287667 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I0111 07:56:19.395453  287667 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0111 07:56:19.502470  287667 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
	I0111 07:56:19.517685  287667 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0111 07:56:19.521954  287667 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
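The bash one-liner above keeps /etc/hosts idempotent: any existing line ending in a tab plus control-plane.minikube.internal is dropped before the current IP is re-appended. A pure-Go equivalent of that rewrite, shown only as a sketch (the helper name is invented):

	// hosts_update.go - sketch of the idempotent /etc/hosts rewrite done by
	// the bash one-liner above: strip old entries for the name, append the new one.
	package main

	import (
		"os"
		"strings"
	)

	func upsertHost(path, ip, name string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(string(data), "\n") {
			// Drop any previous mapping for this host name.
			if strings.HasSuffix(line, "\t"+name) {
				continue
			}
			kept = append(kept, line)
		}
		out := strings.TrimRight(strings.Join(kept, "\n"), "\n") + "\n" + ip + "\t" + name + "\n"
		return os.WriteFile(path, []byte(out), 0o644)
	}

	func main() {
		if err := upsertHost("/etc/hosts", "192.168.76.2", "control-plane.minikube.internal"); err != nil {
			panic(err)
		}
	}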
	I0111 07:56:19.628951  287667 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0111 07:56:19.725173  287667 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0111 07:56:19.751989  287667 certs.go:69] Setting up /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/no-preload-875594 for IP: 192.168.76.2
	I0111 07:56:19.752011  287667 certs.go:195] generating shared ca certs ...
	I0111 07:56:19.752029  287667 certs.go:227] acquiring lock for ca certs: {Name:mk9f7a57f61b85f2c0f52e44b6a926769e368f21 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 07:56:19.752191  287667 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22402-4689/.minikube/ca.key
	I0111 07:56:19.752253  287667 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22402-4689/.minikube/proxy-client-ca.key
	I0111 07:56:19.752266  287667 certs.go:257] generating profile certs ...
	I0111 07:56:19.752343  287667 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/no-preload-875594/client.key
	I0111 07:56:19.752361  287667 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/no-preload-875594/client.crt with IP's: []
	I0111 07:56:19.873130  287667 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/no-preload-875594/client.crt ...
	I0111 07:56:19.873217  287667 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/no-preload-875594/client.crt: {Name:mk61f6d7d78609472ded48d700c6eb27563b713d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 07:56:19.873441  287667 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/no-preload-875594/client.key ...
	I0111 07:56:19.873461  287667 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/no-preload-875594/client.key: {Name:mkd75735e9594f420a8fb24e96f3359c15e77227 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 07:56:19.873584  287667 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/no-preload-875594/apiserver.key.cd03be7e
	I0111 07:56:19.873606  287667 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/no-preload-875594/apiserver.crt.cd03be7e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I0111 07:56:19.917155  287667 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/no-preload-875594/apiserver.crt.cd03be7e ...
	I0111 07:56:19.917184  287667 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/no-preload-875594/apiserver.crt.cd03be7e: {Name:mk40875551a37c5489628fb66e292e68fd7851c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 07:56:19.917353  287667 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/no-preload-875594/apiserver.key.cd03be7e ...
	I0111 07:56:19.917370  287667 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/no-preload-875594/apiserver.key.cd03be7e: {Name:mk453c868b0f7810272481c338bc58e0d320f5db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 07:56:19.917469  287667 certs.go:382] copying /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/no-preload-875594/apiserver.crt.cd03be7e -> /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/no-preload-875594/apiserver.crt
	I0111 07:56:19.917567  287667 certs.go:386] copying /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/no-preload-875594/apiserver.key.cd03be7e -> /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/no-preload-875594/apiserver.key
	I0111 07:56:19.917657  287667 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/no-preload-875594/proxy-client.key
	I0111 07:56:19.917678  287667 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/no-preload-875594/proxy-client.crt with IP's: []
	I0111 07:56:20.058894  287667 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/no-preload-875594/proxy-client.crt ...
	I0111 07:56:20.058926  287667 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/no-preload-875594/proxy-client.crt: {Name:mk4a646cdd1d5a47ef543d4c84cfd0471ce14d76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 07:56:20.059085  287667 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/no-preload-875594/proxy-client.key ...
	I0111 07:56:20.059098  287667 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/no-preload-875594/proxy-client.key: {Name:mkb47ebea86d30c51c9363c7ba4736992aac58b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
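The profile certificates generated above are ordinary CA-signed X.509 certs; the apiserver cert in particular carries the IP SANs 10.96.0.1, 127.0.0.1, 10.0.0.1 and 192.168.76.2 seen in the log. A compact crypto/x509 sketch of issuing a serving cert with those SANs; key size, validity and subjects are assumptions for the example, and the CA here is freshly created rather than the existing minikubeCA:

	// profile_cert.go - sketch of issuing a CA-signed serving cert with the
	// IP SANs from the log (errors elided for brevity).
	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Throwaway CA for the example; minikube reuses its existing minikubeCA.
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().AddDate(10, 0, 0),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		// API server serving cert with the IP SANs seen in the log.
		srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(3, 0, 0),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses: []net.IP{
				net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
				net.ParseIP("10.0.0.1"), net.ParseIP("192.168.76.2"),
			},
		}
		srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)

		f, _ := os.Create("apiserver.crt")
		defer f.Close()
		pem.Encode(f, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
	}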
	I0111 07:56:20.059270  287667 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-4689/.minikube/certs/8238.pem (1338 bytes)
	W0111 07:56:20.059311  287667 certs.go:480] ignoring /home/jenkins/minikube-integration/22402-4689/.minikube/certs/8238_empty.pem, impossibly tiny 0 bytes
	I0111 07:56:20.059328  287667 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-4689/.minikube/certs/ca-key.pem (1679 bytes)
	I0111 07:56:20.059354  287667 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-4689/.minikube/certs/ca.pem (1082 bytes)
	I0111 07:56:20.059378  287667 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-4689/.minikube/certs/cert.pem (1123 bytes)
	I0111 07:56:20.059404  287667 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-4689/.minikube/certs/key.pem (1675 bytes)
	I0111 07:56:20.059450  287667 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-4689/.minikube/files/etc/ssl/certs/82382.pem (1708 bytes)
	I0111 07:56:20.060250  287667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-4689/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0111 07:56:20.079788  287667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-4689/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0111 07:56:20.098305  287667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-4689/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0111 07:56:20.117357  287667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-4689/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0111 07:56:20.137203  287667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/no-preload-875594/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0111 07:56:20.158133  287667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/no-preload-875594/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1671 bytes)
	I0111 07:56:20.177261  287667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/no-preload-875594/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0111 07:56:20.197056  287667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/no-preload-875594/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0111 07:56:20.215474  287667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-4689/.minikube/files/etc/ssl/certs/82382.pem --> /usr/share/ca-certificates/82382.pem (1708 bytes)
	I0111 07:56:20.235492  287667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-4689/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0111 07:56:20.255404  287667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-4689/.minikube/certs/8238.pem --> /usr/share/ca-certificates/8238.pem (1338 bytes)
	I0111 07:56:20.273412  287667 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I0111 07:56:20.287765  287667 ssh_runner.go:195] Run: openssl version
	I0111 07:56:20.295506  287667 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/82382.pem
	I0111 07:56:20.303645  287667 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/82382.pem /etc/ssl/certs/82382.pem
	I0111 07:56:20.311227  287667 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/82382.pem
	I0111 07:56:20.315331  287667 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 11 07:27 /usr/share/ca-certificates/82382.pem
	I0111 07:56:20.315381  287667 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/82382.pem
	I0111 07:56:20.351037  287667 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I0111 07:56:20.359234  287667 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/82382.pem /etc/ssl/certs/3ec20f2e.0
	I0111 07:56:20.367401  287667 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I0111 07:56:20.375397  287667 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I0111 07:56:20.383255  287667 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0111 07:56:20.387973  287667 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 11 07:23 /usr/share/ca-certificates/minikubeCA.pem
	I0111 07:56:20.388040  287667 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0111 07:56:20.424046  287667 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I0111 07:56:20.432948  287667 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I0111 07:56:20.441832  287667 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8238.pem
	I0111 07:56:20.451026  287667 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8238.pem /etc/ssl/certs/8238.pem
	I0111 07:56:20.459525  287667 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8238.pem
	I0111 07:56:20.463719  287667 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 11 07:27 /usr/share/ca-certificates/8238.pem
	I0111 07:56:20.463766  287667 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8238.pem
	I0111 07:56:20.502239  287667 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I0111 07:56:20.510923  287667 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/8238.pem /etc/ssl/certs/51391683.0
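The block above installs each PEM under /usr/share/ca-certificates and then links it into /etc/ssl/certs under its OpenSSL subject-hash name (b5213941.0 for minikubeCA.pem, 3ec20f2e.0 and 51391683.0 for the two user certs), which is how OpenSSL-based clients look up trusted CAs. A small sketch of that step, shelling out to openssl for the hash exactly as the log does (the helper is illustrative):

	// cert_link.go - sketch of the hash-symlink step: ask openssl for the
	// subject hash and point /etc/ssl/certs/<hash>.0 at the PEM.
	package main

	import (
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	func linkByHash(pemPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		_ = os.Remove(link) // emulate `ln -fs`: replace any existing link
		return os.Symlink(pemPath, link)
	}

	func main() {
		if err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			panic(err)
		}
	}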
	I0111 07:56:20.519773  287667 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0111 07:56:20.524094  287667 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0111 07:56:20.524146  287667 kubeadm.go:401] StartCluster: {Name:no-preload-875594 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:no-preload-875594 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0111 07:56:20.524238  287667 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0111 07:56:20.524301  287667 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0111 07:56:20.554627  287667 cri.go:96] found id: ""
	I0111 07:56:20.554700  287667 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0111 07:56:20.564399  287667 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0111 07:56:20.573676  287667 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I0111 07:56:20.573741  287667 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0111 07:56:20.582105  287667 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0111 07:56:20.582122  287667 kubeadm.go:158] found existing configuration files:
	
	I0111 07:56:20.582168  287667 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0111 07:56:20.590481  287667 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0111 07:56:20.590541  287667 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0111 07:56:20.598465  287667 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0111 07:56:20.607681  287667 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0111 07:56:20.607733  287667 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0111 07:56:20.616444  287667 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0111 07:56:20.624850  287667 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0111 07:56:20.624911  287667 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0111 07:56:20.633033  287667 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0111 07:56:20.642181  287667 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0111 07:56:20.642242  287667 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0111 07:56:20.651536  287667 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
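The Start line above is the actual cluster bootstrap: kubeadm init is run against the generated config, PATH is prefixed so the freshly copied v1.35.0 binaries are used, and a long list of preflight checks (Swap, Mem, SystemVerification, the manifest and port checks, and so on) is ignored because the kic container intentionally violates them. A bare-bones os/exec sketch of that invocation, with the ignore list shortened for readability (illustrative only, not minikube's runner):

	// run_kubeadm.go - sketch of invoking kubeadm init with a prefixed PATH
	// and ignored preflight checks, mirroring the Start: line above.
	package main

	import (
		"os"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("sudo", "/bin/bash", "-c",
			`env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init `+
				`--config /var/tmp/minikube/kubeadm.yaml `+
				`--ignore-preflight-errors=Swap,NumCPU,Mem,SystemVerification`)
		cmd.Stdout = os.Stdout
		cmd.Stderr = os.Stderr
		if err := cmd.Run(); err != nil {
			panic(err)
		}
	}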
	I0111 07:56:20.695966  287667 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I0111 07:56:20.696047  287667 kubeadm.go:319] [preflight] Running pre-flight checks
	I0111 07:56:20.763410  287667 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I0111 07:56:20.763497  287667 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I0111 07:56:20.763540  287667 kubeadm.go:319] OS: Linux
	I0111 07:56:20.763612  287667 kubeadm.go:319] CGROUPS_CPU: enabled
	I0111 07:56:20.763679  287667 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I0111 07:56:20.763740  287667 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I0111 07:56:20.763848  287667 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I0111 07:56:20.763913  287667 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I0111 07:56:20.763988  287667 kubeadm.go:319] CGROUPS_PIDS: enabled
	I0111 07:56:20.764076  287667 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I0111 07:56:20.764157  287667 kubeadm.go:319] CGROUPS_IO: enabled
	I0111 07:56:20.829347  287667 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0111 07:56:20.829523  287667 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0111 07:56:20.829690  287667 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0111 07:56:20.846887  287667 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	W0111 07:56:18.130251  277748 pod_ready.go:104] pod "coredns-7d764666f9-jgvnd" is not "Ready", error: <nil>
	I0111 07:56:19.743984  277748 pod_ready.go:99] pod "coredns-7d764666f9-jgvnd" in "kube-system" namespace is gone: getting pod "coredns-7d764666f9-jgvnd" in "kube-system" namespace (will retry): pods "coredns-7d764666f9-jgvnd" not found
	I0111 07:56:19.744011  277748 pod_ready.go:86] duration metric: took 5.620371755s for pod "coredns-7d764666f9-jgvnd" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 07:56:19.746966  277748 pod_ready.go:83] waiting for pod "etcd-bridge-181803" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 07:56:18.289622  295236 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22402-4689/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-285966:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 -I lz4 -xf /preloaded.tar -C /extractDir: (4.536217216s)
	I0111 07:56:18.289655  295236 kic.go:203] duration metric: took 4.536362288s to extract preloaded images to volume ...
	W0111 07:56:18.289739  295236 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0111 07:56:18.289777  295236 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0111 07:56:18.289836  295236 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0111 07:56:18.351777  295236 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-285966 --name embed-certs-285966 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-285966 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-285966 --network embed-certs-285966 --ip 192.168.85.2 --volume embed-certs-285966:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615
	I0111 07:56:18.812665  295236 cli_runner.go:164] Run: docker container inspect embed-certs-285966 --format={{.State.Running}}
	I0111 07:56:18.836856  295236 cli_runner.go:164] Run: docker container inspect embed-certs-285966 --format={{.State.Status}}
	I0111 07:56:18.859192  295236 cli_runner.go:164] Run: docker exec embed-certs-285966 stat /var/lib/dpkg/alternatives/iptables
	I0111 07:56:18.912689  295236 oci.go:144] the created container "embed-certs-285966" has a running status.
	I0111 07:56:18.912724  295236 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22402-4689/.minikube/machines/embed-certs-285966/id_rsa...
	I0111 07:56:19.105269  295236 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22402-4689/.minikube/machines/embed-certs-285966/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0111 07:56:19.145682  295236 cli_runner.go:164] Run: docker container inspect embed-certs-285966 --format={{.State.Status}}
	I0111 07:56:19.172563  295236 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0111 07:56:19.172587  295236 kic_runner.go:114] Args: [docker exec --privileged embed-certs-285966 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0111 07:56:19.252043  295236 cli_runner.go:164] Run: docker container inspect embed-certs-285966 --format={{.State.Status}}
	I0111 07:56:19.276894  295236 machine.go:94] provisionDockerMachine start ...
	I0111 07:56:19.277004  295236 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-285966
	I0111 07:56:19.302088  295236 main.go:144] libmachine: Using SSH client type: native
	I0111 07:56:19.302437  295236 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x9087e0] 0x90b480 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I0111 07:56:19.302465  295236 main.go:144] libmachine: About to run SSH command:
	hostname
	I0111 07:56:19.480977  295236 main.go:144] libmachine: SSH cmd err, output: <nil>: embed-certs-285966
	
	I0111 07:56:19.481004  295236 ubuntu.go:182] provisioning hostname "embed-certs-285966"
	I0111 07:56:19.481078  295236 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-285966
	I0111 07:56:19.500071  295236 main.go:144] libmachine: Using SSH client type: native
	I0111 07:56:19.500339  295236 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x9087e0] 0x90b480 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I0111 07:56:19.500362  295236 main.go:144] libmachine: About to run SSH command:
	sudo hostname embed-certs-285966 && echo "embed-certs-285966" | sudo tee /etc/hostname
	I0111 07:56:19.744257  295236 main.go:144] libmachine: SSH cmd err, output: <nil>: embed-certs-285966
	
	I0111 07:56:19.744367  295236 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-285966
	I0111 07:56:19.770481  295236 main.go:144] libmachine: Using SSH client type: native
	I0111 07:56:19.770768  295236 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x9087e0] 0x90b480 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I0111 07:56:19.770828  295236 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-285966' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-285966/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-285966' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0111 07:56:19.926281  295236 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I0111 07:56:19.926313  295236 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22402-4689/.minikube CaCertPath:/home/jenkins/minikube-integration/22402-4689/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22402-4689/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22402-4689/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22402-4689/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22402-4689/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22402-4689/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22402-4689/.minikube}
	I0111 07:56:19.926345  295236 ubuntu.go:190] setting up certificates
	I0111 07:56:19.926357  295236 provision.go:84] configureAuth start
	I0111 07:56:19.926416  295236 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-285966
	I0111 07:56:19.946454  295236 provision.go:143] copyHostCerts
	I0111 07:56:19.946515  295236 exec_runner.go:144] found /home/jenkins/minikube-integration/22402-4689/.minikube/ca.pem, removing ...
	I0111 07:56:19.946525  295236 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22402-4689/.minikube/ca.pem
	I0111 07:56:19.946598  295236 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22402-4689/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22402-4689/.minikube/ca.pem (1082 bytes)
	I0111 07:56:19.946705  295236 exec_runner.go:144] found /home/jenkins/minikube-integration/22402-4689/.minikube/cert.pem, removing ...
	I0111 07:56:19.946717  295236 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22402-4689/.minikube/cert.pem
	I0111 07:56:19.946752  295236 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22402-4689/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22402-4689/.minikube/cert.pem (1123 bytes)
	I0111 07:56:19.946833  295236 exec_runner.go:144] found /home/jenkins/minikube-integration/22402-4689/.minikube/key.pem, removing ...
	I0111 07:56:19.946843  295236 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22402-4689/.minikube/key.pem
	I0111 07:56:19.946887  295236 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22402-4689/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22402-4689/.minikube/key.pem (1675 bytes)
	I0111 07:56:19.946953  295236 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22402-4689/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22402-4689/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22402-4689/.minikube/certs/ca-key.pem org=jenkins.embed-certs-285966 san=[127.0.0.1 192.168.85.2 embed-certs-285966 localhost minikube]
	I0111 07:56:19.991512  295236 provision.go:177] copyRemoteCerts
	I0111 07:56:19.991573  295236 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0111 07:56:19.991617  295236 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-285966
	I0111 07:56:20.012632  295236 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22402-4689/.minikube/machines/embed-certs-285966/id_rsa Username:docker}
	I0111 07:56:20.117294  295236 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-4689/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0111 07:56:20.137950  295236 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-4689/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0111 07:56:20.157990  295236 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-4689/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0111 07:56:20.175390  295236 provision.go:87] duration metric: took 249.02095ms to configureAuth
	I0111 07:56:20.175423  295236 ubuntu.go:206] setting minikube options for container-runtime
	I0111 07:56:20.175638  295236 config.go:182] Loaded profile config "embed-certs-285966": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 07:56:20.175752  295236 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-285966
	I0111 07:56:20.195552  295236 main.go:144] libmachine: Using SSH client type: native
	I0111 07:56:20.195819  295236 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x9087e0] 0x90b480 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I0111 07:56:20.195839  295236 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0111 07:56:20.487391  295236 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0111 07:56:20.487423  295236 machine.go:97] duration metric: took 1.210507848s to provisionDockerMachine
	I0111 07:56:20.487437  295236 client.go:176] duration metric: took 7.606907702s to LocalClient.Create
	I0111 07:56:20.487449  295236 start.go:167] duration metric: took 7.606959544s to libmachine.API.Create "embed-certs-285966"
	I0111 07:56:20.487461  295236 start.go:293] postStartSetup for "embed-certs-285966" (driver="docker")
	I0111 07:56:20.487474  295236 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0111 07:56:20.487537  295236 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0111 07:56:20.487586  295236 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-285966
	I0111 07:56:20.506811  295236 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22402-4689/.minikube/machines/embed-certs-285966/id_rsa Username:docker}
	I0111 07:56:20.614784  295236 ssh_runner.go:195] Run: cat /etc/os-release
	I0111 07:56:20.618536  295236 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0111 07:56:20.618564  295236 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I0111 07:56:20.618577  295236 filesync.go:126] Scanning /home/jenkins/minikube-integration/22402-4689/.minikube/addons for local assets ...
	I0111 07:56:20.618632  295236 filesync.go:126] Scanning /home/jenkins/minikube-integration/22402-4689/.minikube/files for local assets ...
	I0111 07:56:20.618731  295236 filesync.go:149] local asset: /home/jenkins/minikube-integration/22402-4689/.minikube/files/etc/ssl/certs/82382.pem -> 82382.pem in /etc/ssl/certs
	I0111 07:56:20.618878  295236 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0111 07:56:20.626529  295236 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-4689/.minikube/files/etc/ssl/certs/82382.pem --> /etc/ssl/certs/82382.pem (1708 bytes)
	I0111 07:56:20.648466  295236 start.go:296] duration metric: took 160.992153ms for postStartSetup
	I0111 07:56:20.648851  295236 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-285966
	I0111 07:56:20.669469  295236 profile.go:143] Saving config to /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/embed-certs-285966/config.json ...
	I0111 07:56:20.669730  295236 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0111 07:56:20.669767  295236 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-285966
	I0111 07:56:20.689451  295236 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22402-4689/.minikube/machines/embed-certs-285966/id_rsa Username:docker}
	I0111 07:56:20.790114  295236 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0111 07:56:20.794905  295236 start.go:128] duration metric: took 7.918549992s to createHost
	I0111 07:56:20.794933  295236 start.go:83] releasing machines lock for "embed-certs-285966", held for 7.918655982s
	I0111 07:56:20.795024  295236 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-285966
	I0111 07:56:20.814508  295236 ssh_runner.go:195] Run: cat /version.json
	I0111 07:56:20.814546  295236 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0111 07:56:20.814584  295236 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-285966
	I0111 07:56:20.814604  295236 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-285966
	I0111 07:56:20.835607  295236 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22402-4689/.minikube/machines/embed-certs-285966/id_rsa Username:docker}
	I0111 07:56:20.837688  295236 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22402-4689/.minikube/machines/embed-certs-285966/id_rsa Username:docker}
	I0111 07:56:20.937137  295236 ssh_runner.go:195] Run: systemctl --version
	I0111 07:56:20.992140  295236 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0111 07:56:21.031104  295236 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0111 07:56:21.035775  295236 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0111 07:56:21.035873  295236 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0111 07:56:21.060882  295236 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0111 07:56:21.060908  295236 start.go:496] detecting cgroup driver to use...
	I0111 07:56:21.060940  295236 detect.go:178] detected "systemd" cgroup driver on host os
	I0111 07:56:21.060998  295236 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0111 07:56:21.076872  295236 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0111 07:56:21.088973  295236 docker.go:218] disabling cri-docker service (if available) ...
	I0111 07:56:21.089020  295236 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0111 07:56:21.107132  295236 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0111 07:56:21.128611  295236 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0111 07:56:21.212140  295236 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0111 07:56:21.308017  295236 docker.go:234] disabling docker service ...
	I0111 07:56:21.308084  295236 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0111 07:56:21.329245  295236 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0111 07:56:21.341618  295236 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0111 07:56:21.425916  295236 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0111 07:56:21.519004  295236 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0111 07:56:21.532940  295236 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0111 07:56:21.548050  295236 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0111 07:56:21.548111  295236 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 07:56:21.558321  295236 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I0111 07:56:21.558385  295236 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 07:56:21.567016  295236 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 07:56:21.575306  295236 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 07:56:21.584440  295236 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0111 07:56:21.592728  295236 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 07:56:21.602479  295236 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 07:56:21.617156  295236 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 07:56:21.625885  295236 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0111 07:56:21.633721  295236 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0111 07:56:21.642113  295236 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0111 07:56:21.731857  295236 ssh_runner.go:195] Run: sudo systemctl restart crio
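The sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: the pause image is pinned to registry.k8s.io/pause:3.10.1, cgroup_manager is switched to systemd, conmon_cgroup is set to pod, and a default_sysctls entry opens unprivileged ports, after which crio is restarted. The net effect is roughly the drop-in written by this Go sketch; the TOML below is reconstructed from the sed expressions, not dumped from the node:

	// crio_dropin.go - sketch: write a CRI-O drop-in equivalent to what the
	// sed edits above produce (content reconstructed, not copied from the node).
	package main

	import "os"

	const dropin = `[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"

	[crio.runtime]
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]
	`

	func main() {
		if err := os.WriteFile("/etc/crio/crio.conf.d/02-crio.conf", []byte(dropin), 0o644); err != nil {
			panic(err)
		}
		// minikube then runs `systemctl daemon-reload` and `systemctl restart crio`.
	}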
	I0111 07:56:21.859987  295236 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I0111 07:56:21.860061  295236 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0111 07:56:21.864400  295236 start.go:574] Will wait 60s for crictl version
	I0111 07:56:21.864465  295236 ssh_runner.go:195] Run: which crictl
	I0111 07:56:21.868291  295236 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I0111 07:56:21.894998  295236 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.35.0
	RuntimeApiVersion:  v1
	I0111 07:56:21.895082  295236 ssh_runner.go:195] Run: crio --version
	I0111 07:56:21.922276  295236 ssh_runner.go:195] Run: crio --version
	I0111 07:56:21.953905  295236 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.35.0 ...
	I0111 07:56:20.851792  287667 out.go:252]   - Generating certificates and keys ...
	I0111 07:56:20.851920  287667 kubeadm.go:319] [certs] Using existing ca certificate authority
	I0111 07:56:20.852023  287667 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I0111 07:56:20.984725  287667 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0111 07:56:21.107737  287667 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I0111 07:56:21.266219  287667 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I0111 07:56:21.363554  287667 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I0111 07:56:21.411061  287667 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I0111 07:56:21.411286  287667 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-875594] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0111 07:56:21.582609  287667 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I0111 07:56:21.582768  287667 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-875594] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0111 07:56:21.614451  287667 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0111 07:56:21.687161  287667 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I0111 07:56:21.843254  287667 kubeadm.go:319] [certs] Generating "sa" key and public key
	I0111 07:56:21.843381  287667 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0111 07:56:21.879679  287667 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0111 07:56:21.964395  287667 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0111 07:56:22.142774  287667 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0111 07:56:22.263084  287667 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0111 07:56:22.359071  287667 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0111 07:56:22.359545  287667 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0111 07:56:22.366376  287667 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0111 07:56:21.955155  295236 cli_runner.go:164] Run: docker network inspect embed-certs-285966 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0111 07:56:21.973353  295236 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0111 07:56:21.977679  295236 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0111 07:56:21.989249  295236 kubeadm.go:884] updating cluster {Name:embed-certs-285966 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:embed-certs-285966 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I0111 07:56:21.989428  295236 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0111 07:56:21.989500  295236 ssh_runner.go:195] Run: sudo crictl images --output json
	I0111 07:56:22.025849  295236 crio.go:561] all images are preloaded for cri-o runtime.
	I0111 07:56:22.025870  295236 crio.go:433] Images already preloaded, skipping extraction
	I0111 07:56:22.025936  295236 ssh_runner.go:195] Run: sudo crictl images --output json
	I0111 07:56:22.054142  295236 crio.go:561] all images are preloaded for cri-o runtime.
	I0111 07:56:22.054168  295236 cache_images.go:86] Images are preloaded, skipping loading
	I0111 07:56:22.054177  295236 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0 crio true true} ...
	I0111 07:56:22.054282  295236 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-285966 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:embed-certs-285966 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0111 07:56:22.054371  295236 ssh_runner.go:195] Run: crio config
	I0111 07:56:22.111646  295236 cni.go:84] Creating CNI manager for ""
	I0111 07:56:22.111669  295236 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0111 07:56:22.111684  295236 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I0111 07:56:22.111714  295236 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-285966 NodeName:embed-certs-285966 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0111 07:56:22.111883  295236 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-285966"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0111 07:56:22.111950  295236 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I0111 07:56:22.120962  295236 binaries.go:51] Found k8s binaries, skipping transfer
	I0111 07:56:22.121043  295236 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0111 07:56:22.129548  295236 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I0111 07:56:22.143812  295236 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0111 07:56:22.161228  295236 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I0111 07:56:22.173947  295236 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0111 07:56:22.177537  295236 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0111 07:56:22.187498  295236 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0111 07:56:22.271534  295236 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0111 07:56:22.295303  295236 certs.go:69] Setting up /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/embed-certs-285966 for IP: 192.168.85.2
	I0111 07:56:22.295328  295236 certs.go:195] generating shared ca certs ...
	I0111 07:56:22.295353  295236 certs.go:227] acquiring lock for ca certs: {Name:mk9f7a57f61b85f2c0f52e44b6a926769e368f21 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 07:56:22.295511  295236 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22402-4689/.minikube/ca.key
	I0111 07:56:22.295553  295236 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22402-4689/.minikube/proxy-client-ca.key
	I0111 07:56:22.295564  295236 certs.go:257] generating profile certs ...
	I0111 07:56:22.295615  295236 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/embed-certs-285966/client.key
	I0111 07:56:22.295629  295236 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/embed-certs-285966/client.crt with IP's: []
	I0111 07:56:22.455843  295236 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/embed-certs-285966/client.crt ...
	I0111 07:56:22.455873  295236 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/embed-certs-285966/client.crt: {Name:mka2eab7c90591bb6738f165f4dd4dcac2137503 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 07:56:22.456036  295236 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/embed-certs-285966/client.key ...
	I0111 07:56:22.456050  295236 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/embed-certs-285966/client.key: {Name:mkabec0e7f03c7ff27e8ef4c7caba315e2bb27ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 07:56:22.456127  295236 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/embed-certs-285966/apiserver.key.9d1949d7
	I0111 07:56:22.456141  295236 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/embed-certs-285966/apiserver.crt.9d1949d7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I0111 07:56:22.489389  295236 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/embed-certs-285966/apiserver.crt.9d1949d7 ...
	I0111 07:56:22.489413  295236 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/embed-certs-285966/apiserver.crt.9d1949d7: {Name:mk42474841f15342acdff88c99e5160901b3c9b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 07:56:22.489542  295236 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/embed-certs-285966/apiserver.key.9d1949d7 ...
	I0111 07:56:22.489555  295236 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/embed-certs-285966/apiserver.key.9d1949d7: {Name:mk36d7b2625dbc1c37b899011409da602bb9c7cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 07:56:22.489625  295236 certs.go:382] copying /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/embed-certs-285966/apiserver.crt.9d1949d7 -> /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/embed-certs-285966/apiserver.crt
	I0111 07:56:22.489695  295236 certs.go:386] copying /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/embed-certs-285966/apiserver.key.9d1949d7 -> /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/embed-certs-285966/apiserver.key
	I0111 07:56:22.489751  295236 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/embed-certs-285966/proxy-client.key
	I0111 07:56:22.489768  295236 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/embed-certs-285966/proxy-client.crt with IP's: []
	I0111 07:56:22.621549  295236 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/embed-certs-285966/proxy-client.crt ...
	I0111 07:56:22.621577  295236 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/embed-certs-285966/proxy-client.crt: {Name:mkb746a2f7592ddac51cfa51438b6b462ba9f9f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 07:56:22.621773  295236 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/embed-certs-285966/proxy-client.key ...
	I0111 07:56:22.621791  295236 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/embed-certs-285966/proxy-client.key: {Name:mkf6bba8f12067dd7f231ac47b5fa5b09fc31a54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 07:56:22.622010  295236 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-4689/.minikube/certs/8238.pem (1338 bytes)
	W0111 07:56:22.622051  295236 certs.go:480] ignoring /home/jenkins/minikube-integration/22402-4689/.minikube/certs/8238_empty.pem, impossibly tiny 0 bytes
	I0111 07:56:22.622059  295236 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-4689/.minikube/certs/ca-key.pem (1679 bytes)
	I0111 07:56:22.622087  295236 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-4689/.minikube/certs/ca.pem (1082 bytes)
	I0111 07:56:22.622114  295236 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-4689/.minikube/certs/cert.pem (1123 bytes)
	I0111 07:56:22.622138  295236 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-4689/.minikube/certs/key.pem (1675 bytes)
	I0111 07:56:22.622175  295236 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-4689/.minikube/files/etc/ssl/certs/82382.pem (1708 bytes)
	I0111 07:56:22.622666  295236 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-4689/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0111 07:56:22.367862  287667 out.go:252]   - Booting up control plane ...
	I0111 07:56:22.368021  287667 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0111 07:56:22.368162  287667 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0111 07:56:22.369128  287667 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0111 07:56:22.383205  287667 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0111 07:56:22.383360  287667 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0111 07:56:22.390116  287667 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0111 07:56:22.390370  287667 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0111 07:56:22.390420  287667 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I0111 07:56:22.503096  287667 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0111 07:56:22.503263  287667 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0111 07:56:20.085028  282961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0111 07:56:20.585412  282961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0111 07:56:21.084632  282961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0111 07:56:21.585056  282961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0111 07:56:22.084997  282961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0111 07:56:22.585041  282961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0111 07:56:23.084627  282961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0111 07:56:23.587417  282961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0111 07:56:23.683944  282961 kubeadm.go:1114] duration metric: took 11.697936454s to wait for elevateKubeSystemPrivileges
	I0111 07:56:23.683986  282961 kubeadm.go:403] duration metric: took 22.883195305s to StartCluster
	I0111 07:56:23.684007  282961 settings.go:142] acquiring lock: {Name:mk091111a06dcfa678353e028948ce400417c137 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 07:56:23.684075  282961 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22402-4689/kubeconfig
	I0111 07:56:23.685239  282961 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-4689/kubeconfig: {Name:mkf02a7b755a7b9c0efbafb355649086e2a25e84 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 07:56:23.687871  282961 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0111 07:56:23.688245  282961 config.go:182] Loaded profile config "old-k8s-version-086314": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I0111 07:56:23.688319  282961 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0111 07:56:23.688365  282961 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0111 07:56:23.688541  282961 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-086314"
	I0111 07:56:23.688559  282961 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-086314"
	I0111 07:56:23.688747  282961 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-086314"
	I0111 07:56:23.688772  282961 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-086314"
	I0111 07:56:23.688851  282961 host.go:66] Checking if "old-k8s-version-086314" exists ...
	I0111 07:56:23.689936  282961 cli_runner.go:164] Run: docker container inspect old-k8s-version-086314 --format={{.State.Status}}
	I0111 07:56:23.690338  282961 cli_runner.go:164] Run: docker container inspect old-k8s-version-086314 --format={{.State.Status}}
	I0111 07:56:23.690470  282961 out.go:179] * Verifying Kubernetes components...
	I0111 07:56:23.692233  282961 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0111 07:56:23.731665  282961 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-086314"
	I0111 07:56:23.731728  282961 host.go:66] Checking if "old-k8s-version-086314" exists ...
	I0111 07:56:23.732331  282961 cli_runner.go:164] Run: docker container inspect old-k8s-version-086314 --format={{.State.Status}}
	I0111 07:56:23.737020  282961 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0111 07:56:23.739018  282961 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0111 07:56:23.739036  282961 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0111 07:56:23.739091  282961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-086314
	I0111 07:56:23.773210  282961 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I0111 07:56:23.773233  282961 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0111 07:56:23.773374  282961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-086314
	I0111 07:56:23.784030  282961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/22402-4689/.minikube/machines/old-k8s-version-086314/id_rsa Username:docker}
	I0111 07:56:23.806713  282961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/22402-4689/.minikube/machines/old-k8s-version-086314/id_rsa Username:docker}
	I0111 07:56:23.870994  282961 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0111 07:56:23.898320  282961 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0111 07:56:23.924615  282961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0111 07:56:23.965041  282961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0111 07:56:24.195458  282961 start.go:987] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I0111 07:56:24.197247  282961 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-086314" to be "Ready" ...
	I0111 07:56:24.462403  282961 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I0111 07:56:24.463737  282961 addons.go:530] duration metric: took 775.367881ms for enable addons: enabled=[storage-provisioner default-storageclass]
	W0111 07:56:21.753204  277748 pod_ready.go:104] pod "etcd-bridge-181803" is not "Ready", error: <nil>
	I0111 07:56:23.760042  277748 pod_ready.go:94] pod "etcd-bridge-181803" is "Ready"
	I0111 07:56:23.760089  277748 pod_ready.go:86] duration metric: took 4.013098137s for pod "etcd-bridge-181803" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 07:56:23.767122  277748 pod_ready.go:83] waiting for pod "kube-apiserver-bridge-181803" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 07:56:25.274383  277748 pod_ready.go:94] pod "kube-apiserver-bridge-181803" is "Ready"
	I0111 07:56:25.274414  277748 pod_ready.go:86] duration metric: took 1.507264822s for pod "kube-apiserver-bridge-181803" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 07:56:25.276818  277748 pod_ready.go:83] waiting for pod "kube-controller-manager-bridge-181803" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 07:56:25.281848  277748 pod_ready.go:94] pod "kube-controller-manager-bridge-181803" is "Ready"
	I0111 07:56:25.281875  277748 pod_ready.go:86] duration metric: took 5.034155ms for pod "kube-controller-manager-bridge-181803" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 07:56:25.284898  277748 pod_ready.go:83] waiting for pod "kube-proxy-dx6f7" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 07:56:25.288942  277748 pod_ready.go:94] pod "kube-proxy-dx6f7" is "Ready"
	I0111 07:56:25.288966  277748 pod_ready.go:86] duration metric: took 4.028593ms for pod "kube-proxy-dx6f7" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 07:56:25.350417  277748 pod_ready.go:83] waiting for pod "kube-scheduler-bridge-181803" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 07:56:25.751245  277748 pod_ready.go:94] pod "kube-scheduler-bridge-181803" is "Ready"
	I0111 07:56:25.751277  277748 pod_ready.go:86] duration metric: took 400.830127ms for pod "kube-scheduler-bridge-181803" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 07:56:25.751294  277748 pod_ready.go:40] duration metric: took 13.140137568s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0111 07:56:25.803406  277748 start.go:625] kubectl: 1.35.0, cluster: 1.35.0 (minor skew: 0)
	I0111 07:56:25.804820  277748 out.go:179] * Done! kubectl is now configured to use "bridge-181803" cluster and "default" namespace by default
	I0111 07:56:23.004691  287667 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.808438ms
	I0111 07:56:23.007854  287667 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0111 07:56:23.007972  287667 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I0111 07:56:23.008097  287667 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0111 07:56:23.008235  287667 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0111 07:56:23.514113  287667 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 506.019541ms
	I0111 07:56:25.116313  287667 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.108114971s
	I0111 07:56:27.010008  287667 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.001954931s
	I0111 07:56:27.032479  287667 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0111 07:56:27.044655  287667 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0111 07:56:27.054463  287667 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I0111 07:56:27.054759  287667 kubeadm.go:319] [mark-control-plane] Marking the node no-preload-875594 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0111 07:56:27.062711  287667 kubeadm.go:319] [bootstrap-token] Using token: fb8vka.dggbjkt10oymlx71
	I0111 07:56:22.643456  295236 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-4689/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0111 07:56:22.665401  295236 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-4689/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0111 07:56:22.688386  295236 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-4689/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0111 07:56:22.708914  295236 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/embed-certs-285966/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0111 07:56:22.726281  295236 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/embed-certs-285966/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0111 07:56:22.743431  295236 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/embed-certs-285966/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0111 07:56:22.762105  295236 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/embed-certs-285966/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0111 07:56:22.779302  295236 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-4689/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0111 07:56:22.798953  295236 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-4689/.minikube/certs/8238.pem --> /usr/share/ca-certificates/8238.pem (1338 bytes)
	I0111 07:56:22.818384  295236 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-4689/.minikube/files/etc/ssl/certs/82382.pem --> /usr/share/ca-certificates/82382.pem (1708 bytes)
	I0111 07:56:22.835870  295236 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I0111 07:56:22.849369  295236 ssh_runner.go:195] Run: openssl version
	I0111 07:56:22.855274  295236 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I0111 07:56:22.862754  295236 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I0111 07:56:22.870168  295236 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0111 07:56:22.873974  295236 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 11 07:23 /usr/share/ca-certificates/minikubeCA.pem
	I0111 07:56:22.874024  295236 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0111 07:56:22.908432  295236 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I0111 07:56:22.916067  295236 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I0111 07:56:22.923301  295236 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8238.pem
	I0111 07:56:22.930489  295236 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8238.pem /etc/ssl/certs/8238.pem
	I0111 07:56:22.937433  295236 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8238.pem
	I0111 07:56:22.941849  295236 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 11 07:27 /usr/share/ca-certificates/8238.pem
	I0111 07:56:22.941894  295236 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8238.pem
	I0111 07:56:22.975375  295236 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I0111 07:56:22.983273  295236 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/8238.pem /etc/ssl/certs/51391683.0
	I0111 07:56:22.990521  295236 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/82382.pem
	I0111 07:56:22.997573  295236 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/82382.pem /etc/ssl/certs/82382.pem
	I0111 07:56:23.006293  295236 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/82382.pem
	I0111 07:56:23.010370  295236 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 11 07:27 /usr/share/ca-certificates/82382.pem
	I0111 07:56:23.010415  295236 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/82382.pem
	I0111 07:56:23.045605  295236 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I0111 07:56:23.054717  295236 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/82382.pem /etc/ssl/certs/3ec20f2e.0
	I0111 07:56:23.062750  295236 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0111 07:56:23.066937  295236 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0111 07:56:23.066989  295236 kubeadm.go:401] StartCluster: {Name:embed-certs-285966 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:embed-certs-285966 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0111 07:56:23.067072  295236 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0111 07:56:23.067119  295236 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0111 07:56:23.094218  295236 cri.go:96] found id: ""
	I0111 07:56:23.094281  295236 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0111 07:56:23.102366  295236 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0111 07:56:23.111029  295236 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I0111 07:56:23.111083  295236 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0111 07:56:23.120834  295236 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0111 07:56:23.120854  295236 kubeadm.go:158] found existing configuration files:
	
	I0111 07:56:23.120897  295236 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0111 07:56:23.131752  295236 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0111 07:56:23.131830  295236 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0111 07:56:23.141235  295236 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0111 07:56:23.151929  295236 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0111 07:56:23.151986  295236 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0111 07:56:23.160606  295236 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0111 07:56:23.170417  295236 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0111 07:56:23.170479  295236 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0111 07:56:23.179368  295236 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0111 07:56:23.189202  295236 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0111 07:56:23.189259  295236 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0111 07:56:23.197383  295236 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0111 07:56:23.332139  295236 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I0111 07:56:23.404013  295236 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0111 07:56:27.068095  287667 out.go:252]   - Configuring RBAC rules ...
	I0111 07:56:27.068246  287667 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0111 07:56:27.070992  287667 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0111 07:56:27.075744  287667 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0111 07:56:27.080733  287667 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0111 07:56:27.084205  287667 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0111 07:56:27.087330  287667 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0111 07:56:27.417681  287667 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0111 07:56:27.839968  287667 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I0111 07:56:28.416524  287667 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I0111 07:56:28.417766  287667 kubeadm.go:319] 
	I0111 07:56:28.417927  287667 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I0111 07:56:28.417940  287667 kubeadm.go:319] 
	I0111 07:56:28.418074  287667 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I0111 07:56:28.418108  287667 kubeadm.go:319] 
	I0111 07:56:28.418162  287667 kubeadm.go:319]   mkdir -p $HOME/.kube
	I0111 07:56:28.418265  287667 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0111 07:56:28.418338  287667 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0111 07:56:28.418348  287667 kubeadm.go:319] 
	I0111 07:56:28.418423  287667 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I0111 07:56:28.418432  287667 kubeadm.go:319] 
	I0111 07:56:28.418497  287667 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0111 07:56:28.418505  287667 kubeadm.go:319] 
	I0111 07:56:28.418579  287667 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I0111 07:56:28.418675  287667 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0111 07:56:28.418765  287667 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0111 07:56:28.418771  287667 kubeadm.go:319] 
	I0111 07:56:28.418921  287667 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I0111 07:56:28.419014  287667 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I0111 07:56:28.419020  287667 kubeadm.go:319] 
	I0111 07:56:28.419129  287667 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token fb8vka.dggbjkt10oymlx71 \
	I0111 07:56:28.419241  287667 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:c84f368e6b8058d9b8f5e45cbb9f009ab0e06984b3c0d5786b368fcef9c9a8fc \
	I0111 07:56:28.419268  287667 kubeadm.go:319] 	--control-plane 
	I0111 07:56:28.419273  287667 kubeadm.go:319] 
	I0111 07:56:28.419379  287667 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I0111 07:56:28.419391  287667 kubeadm.go:319] 
	I0111 07:56:28.419499  287667 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token fb8vka.dggbjkt10oymlx71 \
	I0111 07:56:28.419642  287667 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:c84f368e6b8058d9b8f5e45cbb9f009ab0e06984b3c0d5786b368fcef9c9a8fc 
	I0111 07:56:28.422896  287667 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I0111 07:56:28.423065  287667 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0111 07:56:28.423100  287667 cni.go:84] Creating CNI manager for ""
	I0111 07:56:28.423110  287667 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0111 07:56:28.425971  287667 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I0111 07:56:24.701236  282961 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-086314" context rescaled to 1 replicas
	W0111 07:56:26.201323  282961 node_ready.go:57] node "old-k8s-version-086314" has "Ready":"False" status (will retry)
	W0111 07:56:28.203096  282961 node_ready.go:57] node "old-k8s-version-086314" has "Ready":"False" status (will retry)
	I0111 07:56:31.417114  295236 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I0111 07:56:31.417193  295236 kubeadm.go:319] [preflight] Running pre-flight checks
	I0111 07:56:31.417410  295236 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I0111 07:56:31.417484  295236 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I0111 07:56:31.417538  295236 kubeadm.go:319] OS: Linux
	I0111 07:56:31.417615  295236 kubeadm.go:319] CGROUPS_CPU: enabled
	I0111 07:56:31.417684  295236 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I0111 07:56:31.417759  295236 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I0111 07:56:31.417853  295236 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I0111 07:56:31.417939  295236 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I0111 07:56:31.418039  295236 kubeadm.go:319] CGROUPS_PIDS: enabled
	I0111 07:56:31.418091  295236 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I0111 07:56:31.418138  295236 kubeadm.go:319] CGROUPS_IO: enabled
	I0111 07:56:31.418229  295236 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0111 07:56:31.418326  295236 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0111 07:56:31.418414  295236 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0111 07:56:31.418476  295236 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0111 07:56:31.420075  295236 out.go:252]   - Generating certificates and keys ...
	I0111 07:56:31.420153  295236 kubeadm.go:319] [certs] Using existing ca certificate authority
	I0111 07:56:31.420222  295236 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I0111 07:56:31.420279  295236 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0111 07:56:31.420328  295236 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I0111 07:56:31.420404  295236 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I0111 07:56:31.420480  295236 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I0111 07:56:31.420529  295236 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I0111 07:56:31.420691  295236 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [embed-certs-285966 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I0111 07:56:31.420755  295236 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I0111 07:56:31.420907  295236 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-285966 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I0111 07:56:31.420967  295236 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0111 07:56:31.421063  295236 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I0111 07:56:31.421135  295236 kubeadm.go:319] [certs] Generating "sa" key and public key
	I0111 07:56:31.421214  295236 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0111 07:56:31.421298  295236 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0111 07:56:31.421390  295236 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0111 07:56:31.421466  295236 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0111 07:56:31.421565  295236 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0111 07:56:31.421667  295236 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0111 07:56:31.421809  295236 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0111 07:56:31.421938  295236 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0111 07:56:31.423364  295236 out.go:252]   - Booting up control plane ...
	I0111 07:56:31.423449  295236 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0111 07:56:31.423523  295236 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0111 07:56:31.423587  295236 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0111 07:56:31.423686  295236 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0111 07:56:31.423787  295236 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0111 07:56:31.423919  295236 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0111 07:56:31.423994  295236 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0111 07:56:31.424029  295236 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I0111 07:56:31.424144  295236 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0111 07:56:31.424237  295236 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0111 07:56:31.424287  295236 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 500.945409ms
	I0111 07:56:31.424388  295236 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0111 07:56:31.424530  295236 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I0111 07:56:31.424640  295236 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0111 07:56:31.424707  295236 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0111 07:56:31.424778  295236 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.004761147s
	I0111 07:56:31.424855  295236 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.133510666s
	I0111 07:56:31.424955  295236 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.001635849s
	I0111 07:56:31.425093  295236 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0111 07:56:31.425237  295236 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0111 07:56:31.425294  295236 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I0111 07:56:31.425479  295236 kubeadm.go:319] [mark-control-plane] Marking the node embed-certs-285966 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0111 07:56:31.425541  295236 kubeadm.go:319] [bootstrap-token] Using token: hd3jqa.mgd6bqyo5fl2k1ya
	I0111 07:56:31.426963  295236 out.go:252]   - Configuring RBAC rules ...
	I0111 07:56:31.427055  295236 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0111 07:56:31.427127  295236 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0111 07:56:31.427254  295236 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0111 07:56:31.427366  295236 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0111 07:56:31.427467  295236 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0111 07:56:31.427605  295236 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0111 07:56:31.427763  295236 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0111 07:56:31.427843  295236 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I0111 07:56:31.427888  295236 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I0111 07:56:31.427894  295236 kubeadm.go:319] 
	I0111 07:56:31.427965  295236 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I0111 07:56:31.427972  295236 kubeadm.go:319] 
	I0111 07:56:31.428089  295236 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I0111 07:56:31.428102  295236 kubeadm.go:319] 
	I0111 07:56:31.428138  295236 kubeadm.go:319]   mkdir -p $HOME/.kube
	I0111 07:56:31.428229  295236 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0111 07:56:31.428308  295236 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0111 07:56:31.428319  295236 kubeadm.go:319] 
	I0111 07:56:31.428406  295236 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I0111 07:56:31.428415  295236 kubeadm.go:319] 
	I0111 07:56:31.428472  295236 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0111 07:56:31.428478  295236 kubeadm.go:319] 
	I0111 07:56:31.428550  295236 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I0111 07:56:31.428668  295236 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0111 07:56:31.428739  295236 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0111 07:56:31.428747  295236 kubeadm.go:319] 
	I0111 07:56:31.428870  295236 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I0111 07:56:31.428989  295236 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I0111 07:56:31.428998  295236 kubeadm.go:319] 
	I0111 07:56:31.429110  295236 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token hd3jqa.mgd6bqyo5fl2k1ya \
	I0111 07:56:31.429285  295236 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:c84f368e6b8058d9b8f5e45cbb9f009ab0e06984b3c0d5786b368fcef9c9a8fc \
	I0111 07:56:31.429317  295236 kubeadm.go:319] 	--control-plane 
	I0111 07:56:31.429326  295236 kubeadm.go:319] 
	I0111 07:56:31.429450  295236 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I0111 07:56:31.429458  295236 kubeadm.go:319] 
	I0111 07:56:31.429545  295236 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token hd3jqa.mgd6bqyo5fl2k1ya \
	I0111 07:56:31.429681  295236 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:c84f368e6b8058d9b8f5e45cbb9f009ab0e06984b3c0d5786b368fcef9c9a8fc 
	I0111 07:56:31.429703  295236 cni.go:84] Creating CNI manager for ""
	I0111 07:56:31.429715  295236 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0111 07:56:31.431814  295236 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I0111 07:56:31.432937  295236 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0111 07:56:31.437546  295236 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.35.0/kubectl ...
	I0111 07:56:31.437560  295236 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I0111 07:56:31.450511  295236 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0111 07:56:31.682040  295236 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0111 07:56:31.682164  295236 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0111 07:56:31.682194  295236 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-285966 minikube.k8s.io/updated_at=2026_01_11T07_56_31_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=4473aa4ffaa416872fe849e19c0ce3dabca02c04 minikube.k8s.io/name=embed-certs-285966 minikube.k8s.io/primary=true
	I0111 07:56:31.692977  295236 ops.go:34] apiserver oom_adj: -16
	I0111 07:56:31.772835  295236 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0111 07:56:32.273414  295236 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0111 07:56:28.427058  287667 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0111 07:56:28.433913  287667 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.35.0/kubectl ...
	I0111 07:56:28.433935  287667 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I0111 07:56:28.453009  287667 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0111 07:56:28.698409  287667 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0111 07:56:28.698482  287667 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0111 07:56:28.698537  287667 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-875594 minikube.k8s.io/updated_at=2026_01_11T07_56_28_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=4473aa4ffaa416872fe849e19c0ce3dabca02c04 minikube.k8s.io/name=no-preload-875594 minikube.k8s.io/primary=true
	I0111 07:56:28.710579  287667 ops.go:34] apiserver oom_adj: -16
	I0111 07:56:28.772233  287667 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0111 07:56:29.272592  287667 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0111 07:56:29.772334  287667 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0111 07:56:30.273021  287667 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0111 07:56:30.773022  287667 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0111 07:56:31.272401  287667 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0111 07:56:31.773273  287667 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0111 07:56:32.272380  287667 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0111 07:56:32.772494  287667 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0111 07:56:32.855280  287667 kubeadm.go:1114] duration metric: took 4.156858725s to wait for elevateKubeSystemPrivileges
	I0111 07:56:32.855314  287667 kubeadm.go:403] duration metric: took 12.331173543s to StartCluster
	I0111 07:56:32.855333  287667 settings.go:142] acquiring lock: {Name:mk091111a06dcfa678353e028948ce400417c137 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 07:56:32.855412  287667 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22402-4689/kubeconfig
	I0111 07:56:32.856539  287667 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-4689/kubeconfig: {Name:mkf02a7b755a7b9c0efbafb355649086e2a25e84 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 07:56:32.856816  287667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0111 07:56:32.856818  287667 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0111 07:56:32.856856  287667 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0111 07:56:32.856962  287667 addons.go:70] Setting storage-provisioner=true in profile "no-preload-875594"
	I0111 07:56:32.856989  287667 addons.go:239] Setting addon storage-provisioner=true in "no-preload-875594"
	I0111 07:56:32.856992  287667 addons.go:70] Setting default-storageclass=true in profile "no-preload-875594"
	I0111 07:56:32.857017  287667 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-875594"
	I0111 07:56:32.857024  287667 host.go:66] Checking if "no-preload-875594" exists ...
	I0111 07:56:32.857045  287667 config.go:182] Loaded profile config "no-preload-875594": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 07:56:32.857386  287667 cli_runner.go:164] Run: docker container inspect no-preload-875594 --format={{.State.Status}}
	I0111 07:56:32.857548  287667 cli_runner.go:164] Run: docker container inspect no-preload-875594 --format={{.State.Status}}
	I0111 07:56:32.859352  287667 out.go:179] * Verifying Kubernetes components...
	I0111 07:56:32.861059  287667 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0111 07:56:32.882257  287667 addons.go:239] Setting addon default-storageclass=true in "no-preload-875594"
	I0111 07:56:32.882308  287667 host.go:66] Checking if "no-preload-875594" exists ...
	I0111 07:56:32.882783  287667 cli_runner.go:164] Run: docker container inspect no-preload-875594 --format={{.State.Status}}
	I0111 07:56:32.883038  287667 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0111 07:56:32.886623  287667 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0111 07:56:32.886643  287667 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0111 07:56:32.886701  287667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-875594
	I0111 07:56:32.910980  287667 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I0111 07:56:32.911008  287667 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0111 07:56:32.911072  287667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-875594
	I0111 07:56:32.920426  287667 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22402-4689/.minikube/machines/no-preload-875594/id_rsa Username:docker}
	I0111 07:56:32.938094  287667 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22402-4689/.minikube/machines/no-preload-875594/id_rsa Username:docker}
	I0111 07:56:32.951055  287667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0111 07:56:33.005193  287667 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0111 07:56:33.039199  287667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0111 07:56:33.057946  287667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0111 07:56:33.131506  287667 node_ready.go:35] waiting up to 6m0s for node "no-preload-875594" to be "Ready" ...
	I0111 07:56:33.131834  287667 start.go:987] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I0111 07:56:33.363168  287667 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	W0111 07:56:30.700278  282961 node_ready.go:57] node "old-k8s-version-086314" has "Ready":"False" status (will retry)
	W0111 07:56:32.700597  282961 node_ready.go:57] node "old-k8s-version-086314" has "Ready":"False" status (will retry)
	I0111 07:56:32.772883  295236 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0111 07:56:33.272933  295236 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0111 07:56:33.773599  295236 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0111 07:56:34.273863  295236 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0111 07:56:34.773137  295236 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0111 07:56:35.273238  295236 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0111 07:56:35.772983  295236 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0111 07:56:35.850126  295236 kubeadm.go:1114] duration metric: took 4.168020406s to wait for elevateKubeSystemPrivileges
	I0111 07:56:35.850167  295236 kubeadm.go:403] duration metric: took 12.783182276s to StartCluster
	I0111 07:56:35.850190  295236 settings.go:142] acquiring lock: {Name:mk091111a06dcfa678353e028948ce400417c137 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 07:56:35.850261  295236 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22402-4689/kubeconfig
	I0111 07:56:35.852051  295236 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-4689/kubeconfig: {Name:mkf02a7b755a7b9c0efbafb355649086e2a25e84 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 07:56:35.852258  295236 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0111 07:56:35.852283  295236 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0111 07:56:35.852353  295236 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-285966"
	I0111 07:56:35.852262  295236 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0111 07:56:35.852377  295236 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-285966"
	I0111 07:56:35.852413  295236 host.go:66] Checking if "embed-certs-285966" exists ...
	I0111 07:56:35.852421  295236 addons.go:70] Setting default-storageclass=true in profile "embed-certs-285966"
	I0111 07:56:35.852503  295236 config.go:182] Loaded profile config "embed-certs-285966": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 07:56:35.852507  295236 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-285966"
	I0111 07:56:35.852879  295236 cli_runner.go:164] Run: docker container inspect embed-certs-285966 --format={{.State.Status}}
	I0111 07:56:35.853064  295236 cli_runner.go:164] Run: docker container inspect embed-certs-285966 --format={{.State.Status}}
	I0111 07:56:35.854017  295236 out.go:179] * Verifying Kubernetes components...
	I0111 07:56:35.855367  295236 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0111 07:56:35.880087  295236 addons.go:239] Setting addon default-storageclass=true in "embed-certs-285966"
	I0111 07:56:35.880125  295236 host.go:66] Checking if "embed-certs-285966" exists ...
	I0111 07:56:35.880497  295236 cli_runner.go:164] Run: docker container inspect embed-certs-285966 --format={{.State.Status}}
	I0111 07:56:35.881268  295236 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0111 07:56:35.882580  295236 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0111 07:56:35.882602  295236 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0111 07:56:35.882656  295236 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-285966
	I0111 07:56:35.916114  295236 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I0111 07:56:35.916140  295236 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0111 07:56:35.916345  295236 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-285966
	I0111 07:56:35.918203  295236 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22402-4689/.minikube/machines/embed-certs-285966/id_rsa Username:docker}
	I0111 07:56:35.942854  295236 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22402-4689/.minikube/machines/embed-certs-285966/id_rsa Username:docker}
	I0111 07:56:35.965012  295236 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0111 07:56:36.023931  295236 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0111 07:56:36.048740  295236 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0111 07:56:36.065895  295236 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0111 07:56:36.179598  295236 start.go:987] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I0111 07:56:36.184426  295236 node_ready.go:35] waiting up to 6m0s for node "embed-certs-285966" to be "Ready" ...
	I0111 07:56:36.387725  295236 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I0111 07:56:36.388625  295236 addons.go:530] duration metric: took 536.34365ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0111 07:56:36.686269  295236 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-285966" context rescaled to 1 replicas
	I0111 07:56:33.364512  287667 addons.go:530] duration metric: took 507.6604ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0111 07:56:33.636180  287667 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-875594" context rescaled to 1 replicas
	W0111 07:56:35.134702  287667 node_ready.go:57] node "no-preload-875594" has "Ready":"False" status (will retry)
	W0111 07:56:37.135167  287667 node_ready.go:57] node "no-preload-875594" has "Ready":"False" status (will retry)
	W0111 07:56:34.700648  282961 node_ready.go:57] node "old-k8s-version-086314" has "Ready":"False" status (will retry)
	I0111 07:56:36.700824  282961 node_ready.go:49] node "old-k8s-version-086314" is "Ready"
	I0111 07:56:36.700857  282961 node_ready.go:38] duration metric: took 12.503581207s for node "old-k8s-version-086314" to be "Ready" ...
	I0111 07:56:36.700874  282961 api_server.go:52] waiting for apiserver process to appear ...
	I0111 07:56:36.700942  282961 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0111 07:56:36.715788  282961 api_server.go:72] duration metric: took 13.02731593s to wait for apiserver process to appear ...
	I0111 07:56:36.715844  282961 api_server.go:88] waiting for apiserver healthz status ...
	I0111 07:56:36.715866  282961 api_server.go:299] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I0111 07:56:36.721087  282961 api_server.go:325] https://192.168.103.2:8443/healthz returned 200:
	ok
	I0111 07:56:36.722407  282961 api_server.go:141] control plane version: v1.28.0
	I0111 07:56:36.722433  282961 api_server.go:131] duration metric: took 6.581419ms to wait for apiserver health ...
	I0111 07:56:36.722444  282961 system_pods.go:43] waiting for kube-system pods to appear ...
	I0111 07:56:36.727273  282961 system_pods.go:59] 8 kube-system pods found
	I0111 07:56:36.727317  282961 system_pods.go:61] "coredns-5dd5756b68-797jn" [ac6485e5-f57c-40c4-aa87-a724ccfd7fd3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0111 07:56:36.727327  282961 system_pods.go:61] "etcd-old-k8s-version-086314" [dc8173a2-cd5c-4fd5-8bc3-167b8214b512] Running
	I0111 07:56:36.727347  282961 system_pods.go:61] "kindnet-9sfd9" [efc9328f-933c-4a16-a505-312eb58fdb57] Running
	I0111 07:56:36.727358  282961 system_pods.go:61] "kube-apiserver-old-k8s-version-086314" [6c910d3a-ed92-4c51-b482-fb1a13ebe4c2] Running
	I0111 07:56:36.727367  282961 system_pods.go:61] "kube-controller-manager-old-k8s-version-086314" [e235516a-7365-4893-adca-12c49d8f33ca] Running
	I0111 07:56:36.727372  282961 system_pods.go:61] "kube-proxy-dl9xq" [49020e24-0f40-4ef2-93a3-d3d2938acea7] Running
	I0111 07:56:36.727378  282961 system_pods.go:61] "kube-scheduler-old-k8s-version-086314" [d5bc68e0-ee93-45db-81eb-4eee164c7a88] Running
	I0111 07:56:36.727382  282961 system_pods.go:61] "storage-provisioner" [56946fe8-d095-4fe2-8742-0dc984828edf] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0111 07:56:36.727392  282961 system_pods.go:74] duration metric: took 4.942612ms to wait for pod list to return data ...
	I0111 07:56:36.727401  282961 default_sa.go:34] waiting for default service account to be created ...
	I0111 07:56:36.730047  282961 default_sa.go:45] found service account: "default"
	I0111 07:56:36.730071  282961 default_sa.go:55] duration metric: took 2.663214ms for default service account to be created ...
	I0111 07:56:36.730081  282961 system_pods.go:116] waiting for k8s-apps to be running ...
	I0111 07:56:36.734006  282961 system_pods.go:86] 8 kube-system pods found
	I0111 07:56:36.734041  282961 system_pods.go:89] "coredns-5dd5756b68-797jn" [ac6485e5-f57c-40c4-aa87-a724ccfd7fd3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0111 07:56:36.734049  282961 system_pods.go:89] "etcd-old-k8s-version-086314" [dc8173a2-cd5c-4fd5-8bc3-167b8214b512] Running
	I0111 07:56:36.734056  282961 system_pods.go:89] "kindnet-9sfd9" [efc9328f-933c-4a16-a505-312eb58fdb57] Running
	I0111 07:56:36.734063  282961 system_pods.go:89] "kube-apiserver-old-k8s-version-086314" [6c910d3a-ed92-4c51-b482-fb1a13ebe4c2] Running
	I0111 07:56:36.734069  282961 system_pods.go:89] "kube-controller-manager-old-k8s-version-086314" [e235516a-7365-4893-adca-12c49d8f33ca] Running
	I0111 07:56:36.734074  282961 system_pods.go:89] "kube-proxy-dl9xq" [49020e24-0f40-4ef2-93a3-d3d2938acea7] Running
	I0111 07:56:36.734079  282961 system_pods.go:89] "kube-scheduler-old-k8s-version-086314" [d5bc68e0-ee93-45db-81eb-4eee164c7a88] Running
	I0111 07:56:36.734086  282961 system_pods.go:89] "storage-provisioner" [56946fe8-d095-4fe2-8742-0dc984828edf] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0111 07:56:36.734128  282961 retry.go:84] will retry after 300ms: missing components: kube-dns
	I0111 07:56:37.048673  282961 system_pods.go:86] 8 kube-system pods found
	I0111 07:56:37.048713  282961 system_pods.go:89] "coredns-5dd5756b68-797jn" [ac6485e5-f57c-40c4-aa87-a724ccfd7fd3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0111 07:56:37.048726  282961 system_pods.go:89] "etcd-old-k8s-version-086314" [dc8173a2-cd5c-4fd5-8bc3-167b8214b512] Running
	I0111 07:56:37.048734  282961 system_pods.go:89] "kindnet-9sfd9" [efc9328f-933c-4a16-a505-312eb58fdb57] Running
	I0111 07:56:37.048740  282961 system_pods.go:89] "kube-apiserver-old-k8s-version-086314" [6c910d3a-ed92-4c51-b482-fb1a13ebe4c2] Running
	I0111 07:56:37.048746  282961 system_pods.go:89] "kube-controller-manager-old-k8s-version-086314" [e235516a-7365-4893-adca-12c49d8f33ca] Running
	I0111 07:56:37.048751  282961 system_pods.go:89] "kube-proxy-dl9xq" [49020e24-0f40-4ef2-93a3-d3d2938acea7] Running
	I0111 07:56:37.048757  282961 system_pods.go:89] "kube-scheduler-old-k8s-version-086314" [d5bc68e0-ee93-45db-81eb-4eee164c7a88] Running
	I0111 07:56:37.048769  282961 system_pods.go:89] "storage-provisioner" [56946fe8-d095-4fe2-8742-0dc984828edf] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0111 07:56:37.382002  282961 system_pods.go:86] 8 kube-system pods found
	I0111 07:56:37.382052  282961 system_pods.go:89] "coredns-5dd5756b68-797jn" [ac6485e5-f57c-40c4-aa87-a724ccfd7fd3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0111 07:56:37.382061  282961 system_pods.go:89] "etcd-old-k8s-version-086314" [dc8173a2-cd5c-4fd5-8bc3-167b8214b512] Running
	I0111 07:56:37.382068  282961 system_pods.go:89] "kindnet-9sfd9" [efc9328f-933c-4a16-a505-312eb58fdb57] Running
	I0111 07:56:37.382074  282961 system_pods.go:89] "kube-apiserver-old-k8s-version-086314" [6c910d3a-ed92-4c51-b482-fb1a13ebe4c2] Running
	I0111 07:56:37.382084  282961 system_pods.go:89] "kube-controller-manager-old-k8s-version-086314" [e235516a-7365-4893-adca-12c49d8f33ca] Running
	I0111 07:56:37.382089  282961 system_pods.go:89] "kube-proxy-dl9xq" [49020e24-0f40-4ef2-93a3-d3d2938acea7] Running
	I0111 07:56:37.382094  282961 system_pods.go:89] "kube-scheduler-old-k8s-version-086314" [d5bc68e0-ee93-45db-81eb-4eee164c7a88] Running
	I0111 07:56:37.382112  282961 system_pods.go:89] "storage-provisioner" [56946fe8-d095-4fe2-8742-0dc984828edf] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0111 07:56:37.787289  282961 system_pods.go:86] 8 kube-system pods found
	I0111 07:56:37.787323  282961 system_pods.go:89] "coredns-5dd5756b68-797jn" [ac6485e5-f57c-40c4-aa87-a724ccfd7fd3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0111 07:56:37.787333  282961 system_pods.go:89] "etcd-old-k8s-version-086314" [dc8173a2-cd5c-4fd5-8bc3-167b8214b512] Running
	I0111 07:56:37.787338  282961 system_pods.go:89] "kindnet-9sfd9" [efc9328f-933c-4a16-a505-312eb58fdb57] Running
	I0111 07:56:37.787345  282961 system_pods.go:89] "kube-apiserver-old-k8s-version-086314" [6c910d3a-ed92-4c51-b482-fb1a13ebe4c2] Running
	I0111 07:56:37.787349  282961 system_pods.go:89] "kube-controller-manager-old-k8s-version-086314" [e235516a-7365-4893-adca-12c49d8f33ca] Running
	I0111 07:56:37.787352  282961 system_pods.go:89] "kube-proxy-dl9xq" [49020e24-0f40-4ef2-93a3-d3d2938acea7] Running
	I0111 07:56:37.787356  282961 system_pods.go:89] "kube-scheduler-old-k8s-version-086314" [d5bc68e0-ee93-45db-81eb-4eee164c7a88] Running
	I0111 07:56:37.787363  282961 system_pods.go:89] "storage-provisioner" [56946fe8-d095-4fe2-8742-0dc984828edf] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0111 07:56:38.352234  282961 system_pods.go:86] 8 kube-system pods found
	I0111 07:56:38.352265  282961 system_pods.go:89] "coredns-5dd5756b68-797jn" [ac6485e5-f57c-40c4-aa87-a724ccfd7fd3] Running
	I0111 07:56:38.352270  282961 system_pods.go:89] "etcd-old-k8s-version-086314" [dc8173a2-cd5c-4fd5-8bc3-167b8214b512] Running
	I0111 07:56:38.352274  282961 system_pods.go:89] "kindnet-9sfd9" [efc9328f-933c-4a16-a505-312eb58fdb57] Running
	I0111 07:56:38.352277  282961 system_pods.go:89] "kube-apiserver-old-k8s-version-086314" [6c910d3a-ed92-4c51-b482-fb1a13ebe4c2] Running
	I0111 07:56:38.352280  282961 system_pods.go:89] "kube-controller-manager-old-k8s-version-086314" [e235516a-7365-4893-adca-12c49d8f33ca] Running
	I0111 07:56:38.352283  282961 system_pods.go:89] "kube-proxy-dl9xq" [49020e24-0f40-4ef2-93a3-d3d2938acea7] Running
	I0111 07:56:38.352286  282961 system_pods.go:89] "kube-scheduler-old-k8s-version-086314" [d5bc68e0-ee93-45db-81eb-4eee164c7a88] Running
	I0111 07:56:38.352289  282961 system_pods.go:89] "storage-provisioner" [56946fe8-d095-4fe2-8742-0dc984828edf] Running
	I0111 07:56:38.352295  282961 system_pods.go:126] duration metric: took 1.622209293s to wait for k8s-apps to be running ...
	I0111 07:56:38.352302  282961 system_svc.go:44] waiting for kubelet service to be running ....
	I0111 07:56:38.352352  282961 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0111 07:56:38.366787  282961 system_svc.go:56] duration metric: took 14.476258ms WaitForService to wait for kubelet
	I0111 07:56:38.366834  282961 kubeadm.go:587] duration metric: took 14.678366749s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0111 07:56:38.366857  282961 node_conditions.go:102] verifying NodePressure condition ...
	I0111 07:56:38.369621  282961 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0111 07:56:38.369650  282961 node_conditions.go:123] node cpu capacity is 8
	I0111 07:56:38.369674  282961 node_conditions.go:105] duration metric: took 2.803598ms to run NodePressure ...
	I0111 07:56:38.369686  282961 start.go:242] waiting for startup goroutines ...
	I0111 07:56:38.369697  282961 start.go:247] waiting for cluster config update ...
	I0111 07:56:38.369711  282961 start.go:256] writing updated cluster config ...
	I0111 07:56:38.396468  282961 ssh_runner.go:195] Run: rm -f paused
	I0111 07:56:38.400873  282961 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0111 07:56:38.405334  282961 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-797jn" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 07:56:38.409519  282961 pod_ready.go:94] pod "coredns-5dd5756b68-797jn" is "Ready"
	I0111 07:56:38.409541  282961 pod_ready.go:86] duration metric: took 4.186119ms for pod "coredns-5dd5756b68-797jn" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 07:56:38.412020  282961 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-086314" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 07:56:38.415747  282961 pod_ready.go:94] pod "etcd-old-k8s-version-086314" is "Ready"
	I0111 07:56:38.415768  282961 pod_ready.go:86] duration metric: took 3.728913ms for pod "etcd-old-k8s-version-086314" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 07:56:38.418200  282961 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-086314" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 07:56:38.421940  282961 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-086314" is "Ready"
	I0111 07:56:38.421956  282961 pod_ready.go:86] duration metric: took 3.738834ms for pod "kube-apiserver-old-k8s-version-086314" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 07:56:38.424250  282961 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-086314" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 07:56:38.805281  282961 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-086314" is "Ready"
	I0111 07:56:38.805308  282961 pod_ready.go:86] duration metric: took 381.036077ms for pod "kube-controller-manager-old-k8s-version-086314" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 07:56:39.005674  282961 pod_ready.go:83] waiting for pod "kube-proxy-dl9xq" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 07:56:39.405813  282961 pod_ready.go:94] pod "kube-proxy-dl9xq" is "Ready"
	I0111 07:56:39.405836  282961 pod_ready.go:86] duration metric: took 400.132397ms for pod "kube-proxy-dl9xq" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 07:56:39.605332  282961 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-086314" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 07:56:40.004869  282961 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-086314" is "Ready"
	I0111 07:56:40.004894  282961 pod_ready.go:86] duration metric: took 399.539896ms for pod "kube-scheduler-old-k8s-version-086314" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 07:56:40.004907  282961 pod_ready.go:40] duration metric: took 1.603990223s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0111 07:56:40.048640  282961 start.go:625] kubectl: 1.35.0, cluster: 1.28.0 (minor skew: 7)
	I0111 07:56:40.050334  282961 out.go:203] 
	W0111 07:56:40.051827  282961 out.go:285] ! /usr/local/bin/kubectl is version 1.35.0, which may have incompatibilities with Kubernetes 1.28.0.
	I0111 07:56:40.053064  282961 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I0111 07:56:40.054758  282961 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-086314" cluster and "default" namespace by default
	W0111 07:56:38.187351  295236 node_ready.go:57] node "embed-certs-285966" has "Ready":"False" status (will retry)
	W0111 07:56:40.188042  295236 node_ready.go:57] node "embed-certs-285966" has "Ready":"False" status (will retry)
	W0111 07:56:39.634285  287667 node_ready.go:57] node "no-preload-875594" has "Ready":"False" status (will retry)
	W0111 07:56:41.634943  287667 node_ready.go:57] node "no-preload-875594" has "Ready":"False" status (will retry)
	W0111 07:56:42.688108  295236 node_ready.go:57] node "embed-certs-285966" has "Ready":"False" status (will retry)
	W0111 07:56:45.187415  295236 node_ready.go:57] node "embed-certs-285966" has "Ready":"False" status (will retry)
	W0111 07:56:44.135373  287667 node_ready.go:57] node "no-preload-875594" has "Ready":"False" status (will retry)
	I0111 07:56:45.634455  287667 node_ready.go:49] node "no-preload-875594" is "Ready"
	I0111 07:56:45.634486  287667 node_ready.go:38] duration metric: took 12.502951857s for node "no-preload-875594" to be "Ready" ...
	I0111 07:56:45.634505  287667 api_server.go:52] waiting for apiserver process to appear ...
	I0111 07:56:45.634561  287667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0111 07:56:45.648592  287667 api_server.go:72] duration metric: took 12.791738713s to wait for apiserver process to appear ...
	I0111 07:56:45.648616  287667 api_server.go:88] waiting for apiserver healthz status ...
	I0111 07:56:45.648643  287667 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0111 07:56:45.654696  287667 api_server.go:325] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0111 07:56:45.655742  287667 api_server.go:141] control plane version: v1.35.0
	I0111 07:56:45.655762  287667 api_server.go:131] duration metric: took 7.141226ms to wait for apiserver health ...
	I0111 07:56:45.655771  287667 system_pods.go:43] waiting for kube-system pods to appear ...
	I0111 07:56:45.659024  287667 system_pods.go:59] 8 kube-system pods found
	I0111 07:56:45.659053  287667 system_pods.go:61] "coredns-7d764666f9-gw9b6" [fab33c8a-a60a-4148-9d33-607b857a3867] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0111 07:56:45.659059  287667 system_pods.go:61] "etcd-no-preload-875594" [b5c2e76d-30c8-4b1f-9d33-bf17408c89d0] Running
	I0111 07:56:45.659067  287667 system_pods.go:61] "kindnet-p7w57" [f7547d3e-e98d-42af-a0cc-557386ef7efa] Running
	I0111 07:56:45.659077  287667 system_pods.go:61] "kube-apiserver-no-preload-875594" [231873ff-977b-4cce-9b1f-5b54c160ed65] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0111 07:56:45.659086  287667 system_pods.go:61] "kube-controller-manager-no-preload-875594" [b9c4c8b6-049a-4281-9e88-8e216de6745e] Running
	I0111 07:56:45.659093  287667 system_pods.go:61] "kube-proxy-49b4x" [c9fb5a8d-61b1-4674-aafc-a24c6c468eee] Running
	I0111 07:56:45.659103  287667 system_pods.go:61] "kube-scheduler-no-preload-875594" [6e6d1393-a91c-4c18-b5aa-47716fc63a9c] Running
	I0111 07:56:45.659113  287667 system_pods.go:61] "storage-provisioner" [5e5ad34f-7c3c-4f85-8280-d79b3586001e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0111 07:56:45.659123  287667 system_pods.go:74] duration metric: took 3.346133ms to wait for pod list to return data ...
	I0111 07:56:45.659135  287667 default_sa.go:34] waiting for default service account to be created ...
	I0111 07:56:45.661657  287667 default_sa.go:45] found service account: "default"
	I0111 07:56:45.661683  287667 default_sa.go:55] duration metric: took 2.540515ms for default service account to be created ...
	I0111 07:56:45.661694  287667 system_pods.go:116] waiting for k8s-apps to be running ...
	I0111 07:56:45.664459  287667 system_pods.go:86] 8 kube-system pods found
	I0111 07:56:45.664489  287667 system_pods.go:89] "coredns-7d764666f9-gw9b6" [fab33c8a-a60a-4148-9d33-607b857a3867] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0111 07:56:45.664497  287667 system_pods.go:89] "etcd-no-preload-875594" [b5c2e76d-30c8-4b1f-9d33-bf17408c89d0] Running
	I0111 07:56:45.664505  287667 system_pods.go:89] "kindnet-p7w57" [f7547d3e-e98d-42af-a0cc-557386ef7efa] Running
	I0111 07:56:45.664514  287667 system_pods.go:89] "kube-apiserver-no-preload-875594" [231873ff-977b-4cce-9b1f-5b54c160ed65] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0111 07:56:45.664523  287667 system_pods.go:89] "kube-controller-manager-no-preload-875594" [b9c4c8b6-049a-4281-9e88-8e216de6745e] Running
	I0111 07:56:45.664530  287667 system_pods.go:89] "kube-proxy-49b4x" [c9fb5a8d-61b1-4674-aafc-a24c6c468eee] Running
	I0111 07:56:45.664539  287667 system_pods.go:89] "kube-scheduler-no-preload-875594" [6e6d1393-a91c-4c18-b5aa-47716fc63a9c] Running
	I0111 07:56:45.664546  287667 system_pods.go:89] "storage-provisioner" [5e5ad34f-7c3c-4f85-8280-d79b3586001e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0111 07:56:45.664588  287667 retry.go:84] will retry after 200ms: missing components: kube-dns
	I0111 07:56:45.903093  287667 system_pods.go:86] 8 kube-system pods found
	I0111 07:56:45.903118  287667 system_pods.go:89] "coredns-7d764666f9-gw9b6" [fab33c8a-a60a-4148-9d33-607b857a3867] Running
	I0111 07:56:45.903124  287667 system_pods.go:89] "etcd-no-preload-875594" [b5c2e76d-30c8-4b1f-9d33-bf17408c89d0] Running
	I0111 07:56:45.903127  287667 system_pods.go:89] "kindnet-p7w57" [f7547d3e-e98d-42af-a0cc-557386ef7efa] Running
	I0111 07:56:45.903135  287667 system_pods.go:89] "kube-apiserver-no-preload-875594" [231873ff-977b-4cce-9b1f-5b54c160ed65] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0111 07:56:45.903141  287667 system_pods.go:89] "kube-controller-manager-no-preload-875594" [b9c4c8b6-049a-4281-9e88-8e216de6745e] Running
	I0111 07:56:45.903146  287667 system_pods.go:89] "kube-proxy-49b4x" [c9fb5a8d-61b1-4674-aafc-a24c6c468eee] Running
	I0111 07:56:45.903149  287667 system_pods.go:89] "kube-scheduler-no-preload-875594" [6e6d1393-a91c-4c18-b5aa-47716fc63a9c] Running
	I0111 07:56:45.903153  287667 system_pods.go:89] "storage-provisioner" [5e5ad34f-7c3c-4f85-8280-d79b3586001e] Running
	I0111 07:56:45.903163  287667 system_pods.go:126] duration metric: took 241.459654ms to wait for k8s-apps to be running ...
	I0111 07:56:45.903175  287667 system_svc.go:44] waiting for kubelet service to be running ....
	I0111 07:56:45.903219  287667 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0111 07:56:45.917066  287667 system_svc.go:56] duration metric: took 13.883174ms WaitForService to wait for kubelet
	I0111 07:56:45.917091  287667 kubeadm.go:587] duration metric: took 13.060244504s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0111 07:56:45.917108  287667 node_conditions.go:102] verifying NodePressure condition ...
	I0111 07:56:45.919493  287667 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0111 07:56:45.919512  287667 node_conditions.go:123] node cpu capacity is 8
	I0111 07:56:45.919529  287667 node_conditions.go:105] duration metric: took 2.417183ms to run NodePressure ...
	I0111 07:56:45.919540  287667 start.go:242] waiting for startup goroutines ...
	I0111 07:56:45.919548  287667 start.go:247] waiting for cluster config update ...
	I0111 07:56:45.919557  287667 start.go:256] writing updated cluster config ...
	I0111 07:56:45.919774  287667 ssh_runner.go:195] Run: rm -f paused
	I0111 07:56:45.924224  287667 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0111 07:56:45.929605  287667 pod_ready.go:83] waiting for pod "coredns-7d764666f9-gw9b6" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 07:56:45.933792  287667 pod_ready.go:94] pod "coredns-7d764666f9-gw9b6" is "Ready"
	I0111 07:56:45.933902  287667 pod_ready.go:86] duration metric: took 4.270327ms for pod "coredns-7d764666f9-gw9b6" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 07:56:45.935832  287667 pod_ready.go:83] waiting for pod "etcd-no-preload-875594" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 07:56:45.939610  287667 pod_ready.go:94] pod "etcd-no-preload-875594" is "Ready"
	I0111 07:56:45.939633  287667 pod_ready.go:86] duration metric: took 3.781242ms for pod "etcd-no-preload-875594" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 07:56:45.941435  287667 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-875594" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 07:56:47.446747  287667 pod_ready.go:94] pod "kube-apiserver-no-preload-875594" is "Ready"
	I0111 07:56:47.446769  287667 pod_ready.go:86] duration metric: took 1.505313315s for pod "kube-apiserver-no-preload-875594" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 07:56:47.448756  287667 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-875594" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 07:56:47.528569  287667 pod_ready.go:94] pod "kube-controller-manager-no-preload-875594" is "Ready"
	I0111 07:56:47.528595  287667 pod_ready.go:86] duration metric: took 79.809211ms for pod "kube-controller-manager-no-preload-875594" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 07:56:47.729010  287667 pod_ready.go:83] waiting for pod "kube-proxy-49b4x" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 07:56:48.128957  287667 pod_ready.go:94] pod "kube-proxy-49b4x" is "Ready"
	I0111 07:56:48.128990  287667 pod_ready.go:86] duration metric: took 399.952044ms for pod "kube-proxy-49b4x" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 07:56:48.330075  287667 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-875594" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 07:56:48.729174  287667 pod_ready.go:94] pod "kube-scheduler-no-preload-875594" is "Ready"
	I0111 07:56:48.729209  287667 pod_ready.go:86] duration metric: took 399.108718ms for pod "kube-scheduler-no-preload-875594" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 07:56:48.729224  287667 pod_ready.go:40] duration metric: took 2.804969074s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0111 07:56:48.788168  287667 start.go:625] kubectl: 1.35.0, cluster: 1.35.0 (minor skew: 0)
	I0111 07:56:48.789781  287667 out.go:179] * Done! kubectl is now configured to use "no-preload-875594" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Jan 11 07:56:36 old-k8s-version-086314 crio[773]: time="2026-01-11T07:56:36.945223496Z" level=info msg="Starting container: 38cc46cfa0fc88ea4035aa691c5764e9b6a39c48cceab77b015cad7a815c3aac" id=bf64703b-35b5-4eb6-bd0b-db0953a9520a name=/runtime.v1.RuntimeService/StartContainer
	Jan 11 07:56:36 old-k8s-version-086314 crio[773]: time="2026-01-11T07:56:36.947955239Z" level=info msg="Started container" PID=2152 containerID=38cc46cfa0fc88ea4035aa691c5764e9b6a39c48cceab77b015cad7a815c3aac description=kube-system/coredns-5dd5756b68-797jn/coredns id=bf64703b-35b5-4eb6-bd0b-db0953a9520a name=/runtime.v1.RuntimeService/StartContainer sandboxID=6522e821c7007d53076a2e05652c16edba61e13389fab44425c73e0e54279919
	Jan 11 07:56:40 old-k8s-version-086314 crio[773]: time="2026-01-11T07:56:40.508198355Z" level=info msg="Running pod sandbox: default/busybox/POD" id=41682541-1e76-49c8-b915-4f6525f8540e name=/runtime.v1.RuntimeService/RunPodSandbox
	Jan 11 07:56:40 old-k8s-version-086314 crio[773]: time="2026-01-11T07:56:40.50827279Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 11 07:56:40 old-k8s-version-086314 crio[773]: time="2026-01-11T07:56:40.512938695Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:7c554935ff7ee6994893ac1bafd838ee5ffb6a59d9a25492d7ff94544a1b3106 UID:e83aa813-f5e9-4248-9355-54c56b9cd30d NetNS:/var/run/netns/f8832e24-7627-4916-ac1e-fdd93841be28 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0007205d0}] Aliases:map[]}"
	Jan 11 07:56:40 old-k8s-version-086314 crio[773]: time="2026-01-11T07:56:40.512976302Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Jan 11 07:56:40 old-k8s-version-086314 crio[773]: time="2026-01-11T07:56:40.528719511Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:7c554935ff7ee6994893ac1bafd838ee5ffb6a59d9a25492d7ff94544a1b3106 UID:e83aa813-f5e9-4248-9355-54c56b9cd30d NetNS:/var/run/netns/f8832e24-7627-4916-ac1e-fdd93841be28 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0007205d0}] Aliases:map[]}"
	Jan 11 07:56:40 old-k8s-version-086314 crio[773]: time="2026-01-11T07:56:40.528866132Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Jan 11 07:56:40 old-k8s-version-086314 crio[773]: time="2026-01-11T07:56:40.529610274Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Jan 11 07:56:40 old-k8s-version-086314 crio[773]: time="2026-01-11T07:56:40.530582856Z" level=info msg="Ran pod sandbox 7c554935ff7ee6994893ac1bafd838ee5ffb6a59d9a25492d7ff94544a1b3106 with infra container: default/busybox/POD" id=41682541-1e76-49c8-b915-4f6525f8540e name=/runtime.v1.RuntimeService/RunPodSandbox
	Jan 11 07:56:40 old-k8s-version-086314 crio[773]: time="2026-01-11T07:56:40.53181914Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=e234dfef-ca21-4cbb-a773-6cb1d9621575 name=/runtime.v1.ImageService/ImageStatus
	Jan 11 07:56:40 old-k8s-version-086314 crio[773]: time="2026-01-11T07:56:40.531959545Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=e234dfef-ca21-4cbb-a773-6cb1d9621575 name=/runtime.v1.ImageService/ImageStatus
	Jan 11 07:56:40 old-k8s-version-086314 crio[773]: time="2026-01-11T07:56:40.532023404Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=e234dfef-ca21-4cbb-a773-6cb1d9621575 name=/runtime.v1.ImageService/ImageStatus
	Jan 11 07:56:40 old-k8s-version-086314 crio[773]: time="2026-01-11T07:56:40.532475933Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=26e6b2e4-2b56-4bd1-b968-10e63f8874d7 name=/runtime.v1.ImageService/PullImage
	Jan 11 07:56:40 old-k8s-version-086314 crio[773]: time="2026-01-11T07:56:40.532813408Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Jan 11 07:56:41 old-k8s-version-086314 crio[773]: time="2026-01-11T07:56:41.73067878Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=26e6b2e4-2b56-4bd1-b968-10e63f8874d7 name=/runtime.v1.ImageService/PullImage
	Jan 11 07:56:41 old-k8s-version-086314 crio[773]: time="2026-01-11T07:56:41.73156881Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=a2829bea-7cda-4480-a838-072d7270f92a name=/runtime.v1.ImageService/ImageStatus
	Jan 11 07:56:41 old-k8s-version-086314 crio[773]: time="2026-01-11T07:56:41.733303428Z" level=info msg="Creating container: default/busybox/busybox" id=571ada4e-8cb3-4c10-bf84-e20629b96033 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 11 07:56:41 old-k8s-version-086314 crio[773]: time="2026-01-11T07:56:41.733476292Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 11 07:56:41 old-k8s-version-086314 crio[773]: time="2026-01-11T07:56:41.737398159Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 11 07:56:41 old-k8s-version-086314 crio[773]: time="2026-01-11T07:56:41.737898438Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 11 07:56:41 old-k8s-version-086314 crio[773]: time="2026-01-11T07:56:41.773441509Z" level=info msg="Created container 9304d754a77c025feb6d7b461261eaef3729d29984bf123d87934c4c9da915fa: default/busybox/busybox" id=571ada4e-8cb3-4c10-bf84-e20629b96033 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 11 07:56:41 old-k8s-version-086314 crio[773]: time="2026-01-11T07:56:41.774131548Z" level=info msg="Starting container: 9304d754a77c025feb6d7b461261eaef3729d29984bf123d87934c4c9da915fa" id=ecf2af58-4b86-4259-9995-3c479a328c33 name=/runtime.v1.RuntimeService/StartContainer
	Jan 11 07:56:41 old-k8s-version-086314 crio[773]: time="2026-01-11T07:56:41.776428604Z" level=info msg="Started container" PID=2228 containerID=9304d754a77c025feb6d7b461261eaef3729d29984bf123d87934c4c9da915fa description=default/busybox/busybox id=ecf2af58-4b86-4259-9995-3c479a328c33 name=/runtime.v1.RuntimeService/StartContainer sandboxID=7c554935ff7ee6994893ac1bafd838ee5ffb6a59d9a25492d7ff94544a1b3106
	Jan 11 07:56:48 old-k8s-version-086314 crio[773]: time="2026-01-11T07:56:48.300243009Z" level=error msg="Unhandled Error: unable to upgrade websocket connection: websocket server finished before becoming ready (logger=\"UnhandledError\")"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	9304d754a77c0       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   8 seconds ago       Running             busybox                   0                   7c554935ff7ee       busybox                                          default
	38cc46cfa0fc8       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      12 seconds ago      Running             coredns                   0                   6522e821c7007       coredns-5dd5756b68-797jn                         kube-system
	d82b9ab625de5       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 seconds ago      Running             storage-provisioner       0                   ace079112f683       storage-provisioner                              kube-system
	d447d06b8760c       docker.io/kindest/kindnetd@sha256:7c22558dc06a570d46ea6e8a73b23cdc754eb81f7c08d3441a3171ad359ffc27    24 seconds ago      Running             kindnet-cni               0                   65a2ad6f050c7       kindnet-9sfd9                                    kube-system
	7b3a6915da75f       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                      25 seconds ago      Running             kube-proxy                0                   3cecbf5dfb332       kube-proxy-dl9xq                                 kube-system
	dd6186b72c462       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                      44 seconds ago      Running             kube-apiserver            0                   11ee41db2aa35       kube-apiserver-old-k8s-version-086314            kube-system
	dc69de19c8c11       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      44 seconds ago      Running             etcd                      0                   3ae93bad14875       etcd-old-k8s-version-086314                      kube-system
	05edb98cfb1fc       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                      44 seconds ago      Running             kube-controller-manager   0                   38071047d2ca4       kube-controller-manager-old-k8s-version-086314   kube-system
	b37bcc571dd91       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                      44 seconds ago      Running             kube-scheduler            0                   19137e488f009       kube-scheduler-old-k8s-version-086314            kube-system
	
	
	==> coredns [38cc46cfa0fc88ea4035aa691c5764e9b6a39c48cceab77b015cad7a815c3aac] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 25cf5af2951e282c4b0e961a02fb5d3e57c974501832fee92eec17b5135b9ec9d9e87d2ac94e6d117a5ed3dd54e8800aa7b4479706eb54497145ccdb80397d1b
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:53063 - 58411 "HINFO IN 7692178361938538649.8389850399825297278. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.032155846s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-086314
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-086314
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4473aa4ffaa416872fe849e19c0ce3dabca02c04
	                    minikube.k8s.io/name=old-k8s-version-086314
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2026_01_11T07_56_11_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 11 Jan 2026 07:56:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-086314
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 11 Jan 2026 07:56:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 11 Jan 2026 07:56:41 +0000   Sun, 11 Jan 2026 07:56:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 11 Jan 2026 07:56:41 +0000   Sun, 11 Jan 2026 07:56:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 11 Jan 2026 07:56:41 +0000   Sun, 11 Jan 2026 07:56:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 11 Jan 2026 07:56:41 +0000   Sun, 11 Jan 2026 07:56:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    old-k8s-version-086314
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 19a608e996fda0d6a8d0aefd69620b69
	  System UUID:                d90f486e-fdcb-4fb9-aaa3-f827ad3b5d21
	  Boot ID:                    bf5ef4fd-1e5e-4242-b522-48b308ed11f1
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.35.0
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 coredns-5dd5756b68-797jn                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     26s
	  kube-system                 etcd-old-k8s-version-086314                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         39s
	  kube-system                 kindnet-9sfd9                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      26s
	  kube-system                 kube-apiserver-old-k8s-version-086314             250m (3%)     0 (0%)      0 (0%)           0 (0%)         39s
	  kube-system                 kube-controller-manager-old-k8s-version-086314    200m (2%)     0 (0%)      0 (0%)           0 (0%)         40s
	  kube-system                 kube-proxy-dl9xq                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                 kube-scheduler-old-k8s-version-086314             100m (1%)     0 (0%)      0 (0%)           0 (0%)         39s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 25s                kube-proxy       
	  Normal  Starting                 44s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  44s (x8 over 44s)  kubelet          Node old-k8s-version-086314 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    44s (x8 over 44s)  kubelet          Node old-k8s-version-086314 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     44s (x8 over 44s)  kubelet          Node old-k8s-version-086314 status is now: NodeHasSufficientPID
	  Normal  Starting                 39s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  39s                kubelet          Node old-k8s-version-086314 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    39s                kubelet          Node old-k8s-version-086314 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     39s                kubelet          Node old-k8s-version-086314 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           26s                node-controller  Node old-k8s-version-086314 event: Registered Node old-k8s-version-086314 in Controller
	  Normal  NodeReady                13s                kubelet          Node old-k8s-version-086314 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff d2 27 60 f0 e8 33 08 06
	[  +7.744805] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ae ab 39 92 a9 51 08 06
	[  +3.804668] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff f6 cc 68 e4 77 08 08 06
	[  +0.000399] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff d2 27 60 f0 e8 33 08 06
	[ +13.685163] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff ce 63 58 3f ae 06 08 06
	[  +0.000359] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ae ab 39 92 a9 51 08 06
	[ +15.294069] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 62 fd 79 5e 2e 8a 08 06
	[  +0.000335] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 92 a7 42 30 c5 33 08 06
	[  +0.001094] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 76 49 4e 78 31 d8 08 06
	[Jan11 07:56] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 1e 72 0a 69 e2 8c 08 06
	[  +0.129588] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 46 31 8d 74 d3 50 08 06
	[ +23.675508] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 72 b4 a7 bf 59 ba 08 06
	[  +0.000409] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 46 31 8d 74 d3 50 08 06
	
	
	==> etcd [dc69de19c8c116bb41f12a5efaca0e0e2c25ee82a8f8208c221bdb3a1b51955a] <==
	{"level":"info","ts":"2026-01-11T07:56:05.680605Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"f23060b075c4c089","initial-advertise-peer-urls":["https://192.168.103.2:2380"],"listen-peer-urls":["https://192.168.103.2:2380"],"advertise-client-urls":["https://192.168.103.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.103.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2026-01-11T07:56:05.680683Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2026-01-11T07:56:06.060159Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 is starting a new election at term 1"}
	{"level":"info","ts":"2026-01-11T07:56:06.060214Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became pre-candidate at term 1"}
	{"level":"info","ts":"2026-01-11T07:56:06.060248Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgPreVoteResp from f23060b075c4c089 at term 1"}
	{"level":"info","ts":"2026-01-11T07:56:06.060271Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became candidate at term 2"}
	{"level":"info","ts":"2026-01-11T07:56:06.060296Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgVoteResp from f23060b075c4c089 at term 2"}
	{"level":"info","ts":"2026-01-11T07:56:06.060308Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became leader at term 2"}
	{"level":"info","ts":"2026-01-11T07:56:06.060318Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f23060b075c4c089 elected leader f23060b075c4c089 at term 2"}
	{"level":"info","ts":"2026-01-11T07:56:06.063032Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2026-01-11T07:56:06.063778Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"f23060b075c4c089","local-member-attributes":"{Name:old-k8s-version-086314 ClientURLs:[https://192.168.103.2:2379]}","request-path":"/0/members/f23060b075c4c089/attributes","cluster-id":"3336683c081d149d","publish-timeout":"7s"}
	{"level":"info","ts":"2026-01-11T07:56:06.064033Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","cluster-version":"3.5"}
	{"level":"info","ts":"2026-01-11T07:56:06.064184Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2026-01-11T07:56:06.067692Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2026-01-11T07:56:06.067214Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-11T07:56:06.067256Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-11T07:56:06.069415Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2026-01-11T07:56:06.072043Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2026-01-11T07:56:06.073061Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2026-01-11T07:56:06.074578Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.103.2:2379"}
	{"level":"info","ts":"2026-01-11T07:56:11.47396Z","caller":"traceutil/trace.go:171","msg":"trace[788721024] transaction","detail":"{read_only:false; response_revision:292; number_of_response:1; }","duration":"107.638227ms","start":"2026-01-11T07:56:11.366293Z","end":"2026-01-11T07:56:11.473931Z","steps":["trace[788721024] 'process raft request'  (duration: 107.462302ms)"],"step_count":1}
	{"level":"info","ts":"2026-01-11T07:56:11.770178Z","caller":"traceutil/trace.go:171","msg":"trace[1808709851] transaction","detail":"{read_only:false; response_revision:293; number_of_response:1; }","duration":"172.185225ms","start":"2026-01-11T07:56:11.597965Z","end":"2026-01-11T07:56:11.770151Z","steps":["trace[1808709851] 'process raft request'  (duration: 171.933312ms)"],"step_count":1}
	{"level":"info","ts":"2026-01-11T07:56:17.597702Z","caller":"traceutil/trace.go:171","msg":"trace[1337831881] transaction","detail":"{read_only:false; response_revision:315; number_of_response:1; }","duration":"157.265972ms","start":"2026-01-11T07:56:17.440414Z","end":"2026-01-11T07:56:17.59768Z","steps":["trace[1337831881] 'process raft request'  (duration: 157.142514ms)"],"step_count":1}
	{"level":"warn","ts":"2026-01-11T07:56:18.228516Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"176.657907ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13873791290365222385 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-kzs57abq7sxrv4qa5gz634ihfi\" mod_revision:18 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-kzs57abq7sxrv4qa5gz634ihfi\" value_size:612 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-kzs57abq7sxrv4qa5gz634ihfi\" > >>","response":"size:16"}
	{"level":"info","ts":"2026-01-11T07:56:18.228655Z","caller":"traceutil/trace.go:171","msg":"trace[1041180985] transaction","detail":"{read_only:false; response_revision:317; number_of_response:1; }","duration":"224.352709ms","start":"2026-01-11T07:56:18.004278Z","end":"2026-01-11T07:56:18.228631Z","steps":["trace[1041180985] 'process raft request'  (duration: 47.044214ms)","trace[1041180985] 'compare'  (duration: 176.531124ms)"],"step_count":2}
	
	
	==> kernel <==
	 07:56:50 up 39 min,  0 user,  load average: 3.67, 3.38, 2.38
	Linux old-k8s-version-086314 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [d447d06b8760cb31c1b1c401b00fd24910c126952faf3dfb98955047614c9b90] <==
	I0111 07:56:26.006951       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0111 07:56:26.007224       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I0111 07:56:26.007390       1 main.go:148] setting mtu 1500 for CNI 
	I0111 07:56:26.007416       1 main.go:178] kindnetd IP family: "ipv4"
	I0111 07:56:26.007449       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2026-01-11T07:56:26Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0111 07:56:26.304563       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0111 07:56:26.304638       1 controller.go:381] "Waiting for informer caches to sync"
	I0111 07:56:26.304651       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0111 07:56:26.304823       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I0111 07:56:26.705198       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0111 07:56:26.705261       1 metrics.go:72] Registering metrics
	I0111 07:56:26.705453       1 controller.go:711] "Syncing nftables rules"
	I0111 07:56:36.306964       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0111 07:56:36.307022       1 main.go:301] handling current node
	I0111 07:56:46.308045       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0111 07:56:46.308077       1 main.go:301] handling current node
	
	
	==> kube-apiserver [dd6186b72c462e67c2aa33d1abb1f4d285e3fd92792878936d11ae9ae36e83cf] <==
	I0111 07:56:07.745092       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0111 07:56:07.745583       1 controller.go:624] quota admission added evaluator for: namespaces
	I0111 07:56:07.747083       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0111 07:56:07.748476       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0111 07:56:07.748591       1 aggregator.go:166] initial CRD sync complete...
	I0111 07:56:07.748635       1 autoregister_controller.go:141] Starting autoregister controller
	I0111 07:56:07.748675       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0111 07:56:07.748705       1 cache.go:39] Caches are synced for autoregister controller
	I0111 07:56:07.763453       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0111 07:56:07.777569       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0111 07:56:08.650437       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0111 07:56:08.654303       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0111 07:56:08.654325       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0111 07:56:09.152518       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0111 07:56:09.194185       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0111 07:56:09.264312       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0111 07:56:09.276494       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
	I0111 07:56:09.277902       1 controller.go:624] quota admission added evaluator for: endpoints
	I0111 07:56:09.285219       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0111 07:56:09.701874       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0111 07:56:10.629566       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0111 07:56:10.640792       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0111 07:56:10.651724       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0111 07:56:23.562060       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I0111 07:56:23.675870       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [05edb98cfb1fca22ee6901b700258b2db17451e0664734161a7200246198c18d] <==
	I0111 07:56:23.697431       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-9sfd9"
	I0111 07:56:23.724967       1 shared_informer.go:318] Caches are synced for taint
	I0111 07:56:23.725102       1 node_lifecycle_controller.go:1225] "Initializing eviction metric for zone" zone=""
	I0111 07:56:23.725254       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="old-k8s-version-086314"
	I0111 07:56:23.725311       1 node_lifecycle_controller.go:1029] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I0111 07:56:23.726594       1 taint_manager.go:206] "Starting NoExecuteTaintManager"
	I0111 07:56:23.726661       1 taint_manager.go:211] "Sending events to api server"
	I0111 07:56:23.727671       1 event.go:307] "Event occurred" object="old-k8s-version-086314" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node old-k8s-version-086314 event: Registered Node old-k8s-version-086314 in Controller"
	I0111 07:56:23.728660       1 shared_informer.go:318] Caches are synced for resource quota
	I0111 07:56:23.779710       1 event.go:307] "Event occurred" object="kube-system/etcd-old-k8s-version-086314" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0111 07:56:23.780945       1 event.go:307] "Event occurred" object="kube-system/kube-controller-manager-old-k8s-version-086314" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0111 07:56:23.799690       1 event.go:307] "Event occurred" object="kube-system/kube-apiserver-old-k8s-version-086314" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0111 07:56:24.049631       1 shared_informer.go:318] Caches are synced for garbage collector
	I0111 07:56:24.049675       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0111 07:56:24.103139       1 shared_informer.go:318] Caches are synced for garbage collector
	I0111 07:56:24.231870       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I0111 07:56:24.241696       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-bpf47"
	I0111 07:56:24.249236       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="19.673106ms"
	I0111 07:56:24.256970       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="7.660486ms"
	I0111 07:56:24.257099       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="69.874µs"
	I0111 07:56:36.572731       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="98.56µs"
	I0111 07:56:36.588084       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="89.868µs"
	I0111 07:56:37.863428       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="9.656113ms"
	I0111 07:56:37.863539       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="68.295µs"
	I0111 07:56:38.728357       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-proxy [7b3a6915da75f68365d6b65037220000b122b84fce362b80ab02ed5dafdca4df] <==
	I0111 07:56:24.175003       1 server_others.go:69] "Using iptables proxy"
	I0111 07:56:24.188330       1 node.go:141] Successfully retrieved node IP: 192.168.103.2
	I0111 07:56:24.217621       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0111 07:56:24.221954       1 server_others.go:152] "Using iptables Proxier"
	I0111 07:56:24.222009       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0111 07:56:24.222017       1 server_others.go:438] "Defaulting to no-op detect-local"
	I0111 07:56:24.222042       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0111 07:56:24.222310       1 server.go:846] "Version info" version="v1.28.0"
	I0111 07:56:24.222324       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0111 07:56:24.222986       1 config.go:188] "Starting service config controller"
	I0111 07:56:24.223019       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0111 07:56:24.223048       1 config.go:97] "Starting endpoint slice config controller"
	I0111 07:56:24.223053       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0111 07:56:24.223741       1 config.go:315] "Starting node config controller"
	I0111 07:56:24.223752       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0111 07:56:24.323206       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0111 07:56:24.323262       1 shared_informer.go:318] Caches are synced for service config
	I0111 07:56:24.324559       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [b37bcc571dd919ec3022bc467c7ceed9afb4a1afe13b776fec681a4dafe0743c] <==
	W0111 07:56:07.726386       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0111 07:56:07.726448       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0111 07:56:08.568756       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0111 07:56:08.568811       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0111 07:56:08.647459       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0111 07:56:08.647501       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0111 07:56:08.686138       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0111 07:56:08.686187       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0111 07:56:08.729950       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0111 07:56:08.730003       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0111 07:56:08.737812       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0111 07:56:08.737923       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0111 07:56:08.928402       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0111 07:56:08.928446       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0111 07:56:08.961361       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0111 07:56:08.961398       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0111 07:56:08.962226       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0111 07:56:08.962266       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0111 07:56:08.971088       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0111 07:56:08.971125       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0111 07:56:08.985063       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0111 07:56:08.985102       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0111 07:56:09.112107       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0111 07:56:09.112141       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0111 07:56:11.418405       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jan 11 07:56:23 old-k8s-version-086314 kubelet[1403]: I0111 07:56:23.624252    1403 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jan 11 07:56:23 old-k8s-version-086314 kubelet[1403]: I0111 07:56:23.718920    1403 topology_manager.go:215] "Topology Admit Handler" podUID="49020e24-0f40-4ef2-93a3-d3d2938acea7" podNamespace="kube-system" podName="kube-proxy-dl9xq"
	Jan 11 07:56:23 old-k8s-version-086314 kubelet[1403]: I0111 07:56:23.726057    1403 topology_manager.go:215] "Topology Admit Handler" podUID="efc9328f-933c-4a16-a505-312eb58fdb57" podNamespace="kube-system" podName="kindnet-9sfd9"
	Jan 11 07:56:23 old-k8s-version-086314 kubelet[1403]: I0111 07:56:23.801046    1403 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/49020e24-0f40-4ef2-93a3-d3d2938acea7-lib-modules\") pod \"kube-proxy-dl9xq\" (UID: \"49020e24-0f40-4ef2-93a3-d3d2938acea7\") " pod="kube-system/kube-proxy-dl9xq"
	Jan 11 07:56:23 old-k8s-version-086314 kubelet[1403]: I0111 07:56:23.801112    1403 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/efc9328f-933c-4a16-a505-312eb58fdb57-lib-modules\") pod \"kindnet-9sfd9\" (UID: \"efc9328f-933c-4a16-a505-312eb58fdb57\") " pod="kube-system/kindnet-9sfd9"
	Jan 11 07:56:23 old-k8s-version-086314 kubelet[1403]: I0111 07:56:23.801147    1403 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/49020e24-0f40-4ef2-93a3-d3d2938acea7-kube-proxy\") pod \"kube-proxy-dl9xq\" (UID: \"49020e24-0f40-4ef2-93a3-d3d2938acea7\") " pod="kube-system/kube-proxy-dl9xq"
	Jan 11 07:56:23 old-k8s-version-086314 kubelet[1403]: I0111 07:56:23.801175    1403 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/49020e24-0f40-4ef2-93a3-d3d2938acea7-xtables-lock\") pod \"kube-proxy-dl9xq\" (UID: \"49020e24-0f40-4ef2-93a3-d3d2938acea7\") " pod="kube-system/kube-proxy-dl9xq"
	Jan 11 07:56:23 old-k8s-version-086314 kubelet[1403]: I0111 07:56:23.801205    1403 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/efc9328f-933c-4a16-a505-312eb58fdb57-xtables-lock\") pod \"kindnet-9sfd9\" (UID: \"efc9328f-933c-4a16-a505-312eb58fdb57\") " pod="kube-system/kindnet-9sfd9"
	Jan 11 07:56:23 old-k8s-version-086314 kubelet[1403]: I0111 07:56:23.801239    1403 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5vpj\" (UniqueName: \"kubernetes.io/projected/efc9328f-933c-4a16-a505-312eb58fdb57-kube-api-access-j5vpj\") pod \"kindnet-9sfd9\" (UID: \"efc9328f-933c-4a16-a505-312eb58fdb57\") " pod="kube-system/kindnet-9sfd9"
	Jan 11 07:56:23 old-k8s-version-086314 kubelet[1403]: I0111 07:56:23.801272    1403 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-psbpj\" (UniqueName: \"kubernetes.io/projected/49020e24-0f40-4ef2-93a3-d3d2938acea7-kube-api-access-psbpj\") pod \"kube-proxy-dl9xq\" (UID: \"49020e24-0f40-4ef2-93a3-d3d2938acea7\") " pod="kube-system/kube-proxy-dl9xq"
	Jan 11 07:56:23 old-k8s-version-086314 kubelet[1403]: I0111 07:56:23.801301    1403 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/efc9328f-933c-4a16-a505-312eb58fdb57-cni-cfg\") pod \"kindnet-9sfd9\" (UID: \"efc9328f-933c-4a16-a505-312eb58fdb57\") " pod="kube-system/kindnet-9sfd9"
	Jan 11 07:56:24 old-k8s-version-086314 kubelet[1403]: I0111 07:56:24.832526    1403 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-dl9xq" podStartSLOduration=1.832471317 podCreationTimestamp="2026-01-11 07:56:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-11 07:56:24.819394651 +0000 UTC m=+14.217726732" watchObservedRunningTime="2026-01-11 07:56:24.832471317 +0000 UTC m=+14.230803456"
	Jan 11 07:56:25 old-k8s-version-086314 kubelet[1403]: I0111 07:56:25.820412    1403 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-9sfd9" podStartSLOduration=1.194624846 podCreationTimestamp="2026-01-11 07:56:23 +0000 UTC" firstStartedPulling="2026-01-11 07:56:24.043707319 +0000 UTC m=+13.442039387" lastFinishedPulling="2026-01-11 07:56:25.669445942 +0000 UTC m=+15.067778018" observedRunningTime="2026-01-11 07:56:25.820162366 +0000 UTC m=+15.218494444" watchObservedRunningTime="2026-01-11 07:56:25.820363477 +0000 UTC m=+15.218695554"
	Jan 11 07:56:36 old-k8s-version-086314 kubelet[1403]: I0111 07:56:36.546975    1403 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Jan 11 07:56:36 old-k8s-version-086314 kubelet[1403]: I0111 07:56:36.570152    1403 topology_manager.go:215] "Topology Admit Handler" podUID="56946fe8-d095-4fe2-8742-0dc984828edf" podNamespace="kube-system" podName="storage-provisioner"
	Jan 11 07:56:36 old-k8s-version-086314 kubelet[1403]: I0111 07:56:36.572503    1403 topology_manager.go:215] "Topology Admit Handler" podUID="ac6485e5-f57c-40c4-aa87-a724ccfd7fd3" podNamespace="kube-system" podName="coredns-5dd5756b68-797jn"
	Jan 11 07:56:36 old-k8s-version-086314 kubelet[1403]: I0111 07:56:36.597642    1403 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5vnk\" (UniqueName: \"kubernetes.io/projected/ac6485e5-f57c-40c4-aa87-a724ccfd7fd3-kube-api-access-s5vnk\") pod \"coredns-5dd5756b68-797jn\" (UID: \"ac6485e5-f57c-40c4-aa87-a724ccfd7fd3\") " pod="kube-system/coredns-5dd5756b68-797jn"
	Jan 11 07:56:36 old-k8s-version-086314 kubelet[1403]: I0111 07:56:36.597698    1403 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9nmm\" (UniqueName: \"kubernetes.io/projected/56946fe8-d095-4fe2-8742-0dc984828edf-kube-api-access-x9nmm\") pod \"storage-provisioner\" (UID: \"56946fe8-d095-4fe2-8742-0dc984828edf\") " pod="kube-system/storage-provisioner"
	Jan 11 07:56:36 old-k8s-version-086314 kubelet[1403]: I0111 07:56:36.597734    1403 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ac6485e5-f57c-40c4-aa87-a724ccfd7fd3-config-volume\") pod \"coredns-5dd5756b68-797jn\" (UID: \"ac6485e5-f57c-40c4-aa87-a724ccfd7fd3\") " pod="kube-system/coredns-5dd5756b68-797jn"
	Jan 11 07:56:36 old-k8s-version-086314 kubelet[1403]: I0111 07:56:36.597851    1403 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/56946fe8-d095-4fe2-8742-0dc984828edf-tmp\") pod \"storage-provisioner\" (UID: \"56946fe8-d095-4fe2-8742-0dc984828edf\") " pod="kube-system/storage-provisioner"
	Jan 11 07:56:37 old-k8s-version-086314 kubelet[1403]: I0111 07:56:37.844302    1403 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=13.844250054 podCreationTimestamp="2026-01-11 07:56:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-11 07:56:37.844067154 +0000 UTC m=+27.242399233" watchObservedRunningTime="2026-01-11 07:56:37.844250054 +0000 UTC m=+27.242582132"
	Jan 11 07:56:37 old-k8s-version-086314 kubelet[1403]: I0111 07:56:37.854190    1403 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-797jn" podStartSLOduration=14.854145484 podCreationTimestamp="2026-01-11 07:56:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-11 07:56:37.853787743 +0000 UTC m=+27.252119824" watchObservedRunningTime="2026-01-11 07:56:37.854145484 +0000 UTC m=+27.252477571"
	Jan 11 07:56:40 old-k8s-version-086314 kubelet[1403]: I0111 07:56:40.206019    1403 topology_manager.go:215] "Topology Admit Handler" podUID="e83aa813-f5e9-4248-9355-54c56b9cd30d" podNamespace="default" podName="busybox"
	Jan 11 07:56:40 old-k8s-version-086314 kubelet[1403]: I0111 07:56:40.318487    1403 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-955st\" (UniqueName: \"kubernetes.io/projected/e83aa813-f5e9-4248-9355-54c56b9cd30d-kube-api-access-955st\") pod \"busybox\" (UID: \"e83aa813-f5e9-4248-9355-54c56b9cd30d\") " pod="default/busybox"
	Jan 11 07:56:41 old-k8s-version-086314 kubelet[1403]: I0111 07:56:41.857874    1403 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=0.659016686 podCreationTimestamp="2026-01-11 07:56:40 +0000 UTC" firstStartedPulling="2026-01-11 07:56:40.532195081 +0000 UTC m=+29.930527142" lastFinishedPulling="2026-01-11 07:56:41.730994991 +0000 UTC m=+31.129327058" observedRunningTime="2026-01-11 07:56:41.857631132 +0000 UTC m=+31.255963209" watchObservedRunningTime="2026-01-11 07:56:41.857816602 +0000 UTC m=+31.256148675"
	
	
	==> storage-provisioner [d82b9ab625de5b1d7212c8c03b443a143f51b00610b151191e2be7b8e3d8ce36] <==
	I0111 07:56:36.945758       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0111 07:56:36.961032       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0111 07:56:36.961101       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0111 07:56:36.969980       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0111 07:56:36.970108       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c5462a82-2278-4014-bc22-31381ebd17ec", APIVersion:"v1", ResourceVersion:"441", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-086314_845294e4-b5ca-4250-b489-a82931afd96d became leader
	I0111 07:56:36.970267       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-086314_845294e4-b5ca-4250-b489-a82931afd96d!
	I0111 07:56:37.070682       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-086314_845294e4-b5ca-4250-b489-a82931afd96d!
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-086314 -n old-k8s-version-086314
helpers_test.go:270: (dbg) Run:  kubectl --context old-k8s-version-086314 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.51s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.23s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-875594 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-875594 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (274.025038ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-11T07:56:56Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p no-preload-875594 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-875594 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context no-preload-875594 describe deploy/metrics-server -n kube-system: exit status 1 (76.688621ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context no-preload-875594 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-875594
helpers_test.go:244: (dbg) docker inspect no-preload-875594:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "7deefeed7d706251e89e7d05143766bbaeeb5dd4cef2384ff91186656aae8676",
	        "Created": "2026-01-11T07:55:58.906575033Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 288558,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2026-01-11T07:55:58.948475192Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:9d81ad6f098dcf669510a21e6f98da4d3301079c139c22f3d7bcee050830f45e",
	        "ResolvConfPath": "/var/lib/docker/containers/7deefeed7d706251e89e7d05143766bbaeeb5dd4cef2384ff91186656aae8676/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7deefeed7d706251e89e7d05143766bbaeeb5dd4cef2384ff91186656aae8676/hostname",
	        "HostsPath": "/var/lib/docker/containers/7deefeed7d706251e89e7d05143766bbaeeb5dd4cef2384ff91186656aae8676/hosts",
	        "LogPath": "/var/lib/docker/containers/7deefeed7d706251e89e7d05143766bbaeeb5dd4cef2384ff91186656aae8676/7deefeed7d706251e89e7d05143766bbaeeb5dd4cef2384ff91186656aae8676-json.log",
	        "Name": "/no-preload-875594",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-875594:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-875594",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "7deefeed7d706251e89e7d05143766bbaeeb5dd4cef2384ff91186656aae8676",
	                "LowerDir": "/var/lib/docker/overlay2/838dead28b8cc9aab45322b83a05d005735e451912ea523da4284e810b6d4ac3-init/diff:/var/lib/docker/overlay2/fbed3e2386e680328722e9ed68250a4aae1dd3b50a64848ee2ebb685fb1cc002/diff",
	                "MergedDir": "/var/lib/docker/overlay2/838dead28b8cc9aab45322b83a05d005735e451912ea523da4284e810b6d4ac3/merged",
	                "UpperDir": "/var/lib/docker/overlay2/838dead28b8cc9aab45322b83a05d005735e451912ea523da4284e810b6d4ac3/diff",
	                "WorkDir": "/var/lib/docker/overlay2/838dead28b8cc9aab45322b83a05d005735e451912ea523da4284e810b6d4ac3/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-875594",
	                "Source": "/var/lib/docker/volumes/no-preload-875594/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-875594",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-875594",
	                "name.minikube.sigs.k8s.io": "no-preload-875594",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "0218a5c1221d57c4d5c5303c75f8424889e2f8e0d542cb33d84130cdfff1fa21",
	            "SandboxKey": "/var/run/docker/netns/0218a5c1221d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33093"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33094"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33097"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33095"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33096"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-875594": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "e8428f34acce91e05870737f0d974d96eb8dd52ccd21658be3061a5635b849ca",
	                    "EndpointID": "634e2ea28169124c8dc50aa1af29410390f636616177f5ea2001a9a89598889a",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "3a:92:8d:0c:62:6d",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-875594",
	                        "7deefeed7d70"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-875594 -n no-preload-875594
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-875594 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p no-preload-875594 logs -n 25: (1.047289458s)
helpers_test.go:261: TestStartStop/group/no-preload/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                   │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-181803 sudo cat /var/lib/kubelet/config.yaml                                                                                                                   │ bridge-181803                │ jenkins │ v1.37.0 │ 11 Jan 26 07:56 UTC │ 11 Jan 26 07:56 UTC │
	│ ssh     │ -p bridge-181803 sudo systemctl status docker --all --full --no-pager                                                                                                    │ bridge-181803                │ jenkins │ v1.37.0 │ 11 Jan 26 07:56 UTC │                     │
	│ ssh     │ -p bridge-181803 sudo systemctl cat docker --no-pager                                                                                                                    │ bridge-181803                │ jenkins │ v1.37.0 │ 11 Jan 26 07:56 UTC │ 11 Jan 26 07:56 UTC │
	│ ssh     │ -p bridge-181803 sudo cat /etc/docker/daemon.json                                                                                                                        │ bridge-181803                │ jenkins │ v1.37.0 │ 11 Jan 26 07:56 UTC │                     │
	│ ssh     │ -p bridge-181803 sudo docker system info                                                                                                                                 │ bridge-181803                │ jenkins │ v1.37.0 │ 11 Jan 26 07:56 UTC │                     │
	│ ssh     │ -p bridge-181803 sudo systemctl status cri-docker --all --full --no-pager                                                                                                │ bridge-181803                │ jenkins │ v1.37.0 │ 11 Jan 26 07:56 UTC │                     │
	│ addons  │ enable metrics-server -p old-k8s-version-086314 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                             │ old-k8s-version-086314       │ jenkins │ v1.37.0 │ 11 Jan 26 07:56 UTC │                     │
	│ ssh     │ -p bridge-181803 sudo systemctl cat cri-docker --no-pager                                                                                                                │ bridge-181803                │ jenkins │ v1.37.0 │ 11 Jan 26 07:56 UTC │ 11 Jan 26 07:56 UTC │
	│ ssh     │ -p bridge-181803 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                           │ bridge-181803                │ jenkins │ v1.37.0 │ 11 Jan 26 07:56 UTC │                     │
	│ ssh     │ -p bridge-181803 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                     │ bridge-181803                │ jenkins │ v1.37.0 │ 11 Jan 26 07:56 UTC │ 11 Jan 26 07:56 UTC │
	│ ssh     │ -p bridge-181803 sudo cri-dockerd --version                                                                                                                              │ bridge-181803                │ jenkins │ v1.37.0 │ 11 Jan 26 07:56 UTC │ 11 Jan 26 07:56 UTC │
	│ ssh     │ -p bridge-181803 sudo systemctl status containerd --all --full --no-pager                                                                                                │ bridge-181803                │ jenkins │ v1.37.0 │ 11 Jan 26 07:56 UTC │                     │
	│ ssh     │ -p bridge-181803 sudo systemctl cat containerd --no-pager                                                                                                                │ bridge-181803                │ jenkins │ v1.37.0 │ 11 Jan 26 07:56 UTC │ 11 Jan 26 07:56 UTC │
	│ ssh     │ -p bridge-181803 sudo cat /lib/systemd/system/containerd.service                                                                                                         │ bridge-181803                │ jenkins │ v1.37.0 │ 11 Jan 26 07:56 UTC │ 11 Jan 26 07:56 UTC │
	│ stop    │ -p old-k8s-version-086314 --alsologtostderr -v=3                                                                                                                         │ old-k8s-version-086314       │ jenkins │ v1.37.0 │ 11 Jan 26 07:56 UTC │                     │
	│ ssh     │ -p bridge-181803 sudo cat /etc/containerd/config.toml                                                                                                                    │ bridge-181803                │ jenkins │ v1.37.0 │ 11 Jan 26 07:56 UTC │ 11 Jan 26 07:56 UTC │
	│ ssh     │ -p bridge-181803 sudo containerd config dump                                                                                                                             │ bridge-181803                │ jenkins │ v1.37.0 │ 11 Jan 26 07:56 UTC │ 11 Jan 26 07:56 UTC │
	│ ssh     │ -p bridge-181803 sudo systemctl status crio --all --full --no-pager                                                                                                      │ bridge-181803                │ jenkins │ v1.37.0 │ 11 Jan 26 07:56 UTC │ 11 Jan 26 07:56 UTC │
	│ ssh     │ -p bridge-181803 sudo systemctl cat crio --no-pager                                                                                                                      │ bridge-181803                │ jenkins │ v1.37.0 │ 11 Jan 26 07:56 UTC │ 11 Jan 26 07:56 UTC │
	│ ssh     │ -p bridge-181803 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                            │ bridge-181803                │ jenkins │ v1.37.0 │ 11 Jan 26 07:56 UTC │ 11 Jan 26 07:56 UTC │
	│ ssh     │ -p bridge-181803 sudo crio config                                                                                                                                        │ bridge-181803                │ jenkins │ v1.37.0 │ 11 Jan 26 07:56 UTC │ 11 Jan 26 07:56 UTC │
	│ delete  │ -p bridge-181803                                                                                                                                                         │ bridge-181803                │ jenkins │ v1.37.0 │ 11 Jan 26 07:56 UTC │ 11 Jan 26 07:56 UTC │
	│ delete  │ -p disable-driver-mounts-768607                                                                                                                                          │ disable-driver-mounts-768607 │ jenkins │ v1.37.0 │ 11 Jan 26 07:56 UTC │ 11 Jan 26 07:56 UTC │
	│ start   │ -p default-k8s-diff-port-585688 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ default-k8s-diff-port-585688 │ jenkins │ v1.37.0 │ 11 Jan 26 07:56 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-875594 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                  │ no-preload-875594            │ jenkins │ v1.37.0 │ 11 Jan 26 07:56 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2026/01/11 07:56:55
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0111 07:56:55.556161  306327 out.go:360] Setting OutFile to fd 1 ...
	I0111 07:56:55.556293  306327 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 07:56:55.556304  306327 out.go:374] Setting ErrFile to fd 2...
	I0111 07:56:55.556310  306327 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 07:56:55.556520  306327 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22402-4689/.minikube/bin
	I0111 07:56:55.557017  306327 out.go:368] Setting JSON to false
	I0111 07:56:55.558161  306327 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2364,"bootTime":1768115852,"procs":298,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0111 07:56:55.558214  306327 start.go:143] virtualization: kvm guest
	I0111 07:56:55.560314  306327 out.go:179] * [default-k8s-diff-port-585688] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0111 07:56:55.561557  306327 out.go:179]   - MINIKUBE_LOCATION=22402
	I0111 07:56:55.561547  306327 notify.go:221] Checking for updates...
	I0111 07:56:55.564071  306327 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0111 07:56:55.565238  306327 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22402-4689/kubeconfig
	I0111 07:56:55.566238  306327 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22402-4689/.minikube
	I0111 07:56:55.567307  306327 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0111 07:56:55.568659  306327 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0111 07:56:55.570240  306327 config.go:182] Loaded profile config "embed-certs-285966": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 07:56:55.570360  306327 config.go:182] Loaded profile config "no-preload-875594": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 07:56:55.570450  306327 config.go:182] Loaded profile config "old-k8s-version-086314": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I0111 07:56:55.570530  306327 driver.go:422] Setting default libvirt URI to qemu:///system
	I0111 07:56:55.594387  306327 docker.go:124] docker version: linux-29.1.4:Docker Engine - Community
	I0111 07:56:55.594535  306327 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0111 07:56:55.648226  306327 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:70 OomKillDisable:false NGoroutines:82 SystemTime:2026-01-11 07:56:55.638385346 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0111 07:56:55.648333  306327 docker.go:319] overlay module found
	I0111 07:56:55.650050  306327 out.go:179] * Using the docker driver based on user configuration
	I0111 07:56:55.651233  306327 start.go:309] selected driver: docker
	I0111 07:56:55.651252  306327 start.go:928] validating driver "docker" against <nil>
	I0111 07:56:55.651263  306327 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0111 07:56:55.651786  306327 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0111 07:56:55.706677  306327 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:70 OomKillDisable:false NGoroutines:82 SystemTime:2026-01-11 07:56:55.695165644 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0111 07:56:55.706837  306327 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I0111 07:56:55.707067  306327 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0111 07:56:55.708882  306327 out.go:179] * Using Docker driver with root privileges
	I0111 07:56:55.710098  306327 cni.go:84] Creating CNI manager for ""
	I0111 07:56:55.710166  306327 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0111 07:56:55.710177  306327 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I0111 07:56:55.710259  306327 start.go:353] cluster config:
	{Name:default-k8s-diff-port-585688 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-585688 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0111 07:56:55.711540  306327 out.go:179] * Starting "default-k8s-diff-port-585688" primary control-plane node in "default-k8s-diff-port-585688" cluster
	I0111 07:56:55.712644  306327 cache.go:134] Beginning downloading kic base image for docker with crio
	I0111 07:56:55.713878  306327 out.go:179] * Pulling base image v0.0.48-1768032998-22402 ...
	I0111 07:56:55.715052  306327 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0111 07:56:55.715084  306327 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22402-4689/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4
	I0111 07:56:55.715091  306327 cache.go:65] Caching tarball of preloaded images
	I0111 07:56:55.715155  306327 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 in local docker daemon
	I0111 07:56:55.715181  306327 preload.go:251] Found /home/jenkins/minikube-integration/22402-4689/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0111 07:56:55.715193  306327 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I0111 07:56:55.715369  306327 profile.go:143] Saving config to /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/default-k8s-diff-port-585688/config.json ...
	I0111 07:56:55.715415  306327 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/default-k8s-diff-port-585688/config.json: {Name:mkbc339e7cdf379eff2068dfb6d816e133305c14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 07:56:55.736316  306327 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 in local docker daemon, skipping pull
	I0111 07:56:55.736342  306327 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 exists in daemon, skipping load
	I0111 07:56:55.736357  306327 cache.go:243] Successfully downloaded all kic artifacts
	I0111 07:56:55.736397  306327 start.go:360] acquireMachinesLock for default-k8s-diff-port-585688: {Name:mk2e6a9de762d1bbe8860b89830bc1ce9534f022 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0111 07:56:55.736498  306327 start.go:364] duration metric: took 79.092µs to acquireMachinesLock for "default-k8s-diff-port-585688"
	I0111 07:56:55.736535  306327 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-585688 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-585688 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0111 07:56:55.736632  306327 start.go:125] createHost starting for "" (driver="docker")
	
	
	==> CRI-O <==
	Jan 11 07:56:45 no-preload-875594 crio[766]: time="2026-01-11T07:56:45.794479949Z" level=info msg="Starting container: a138d750f1aaa2c9b1ac5e9a194c75f506856d4fa30b5f0a98f82867b0c35f05" id=edd18b3f-677e-417a-b488-ef24fa95128f name=/runtime.v1.RuntimeService/StartContainer
	Jan 11 07:56:45 no-preload-875594 crio[766]: time="2026-01-11T07:56:45.796517385Z" level=info msg="Started container" PID=2773 containerID=a138d750f1aaa2c9b1ac5e9a194c75f506856d4fa30b5f0a98f82867b0c35f05 description=kube-system/coredns-7d764666f9-gw9b6/coredns id=edd18b3f-677e-417a-b488-ef24fa95128f name=/runtime.v1.RuntimeService/StartContainer sandboxID=8767669fe91f0119e623299afe54cb335c4b07f25b6f8adb65bd505a2ff31761
	Jan 11 07:56:49 no-preload-875594 crio[766]: time="2026-01-11T07:56:49.289766542Z" level=info msg="Running pod sandbox: default/busybox/POD" id=2071ccbd-fb6a-4d64-afc4-d5955e4884f5 name=/runtime.v1.RuntimeService/RunPodSandbox
	Jan 11 07:56:49 no-preload-875594 crio[766]: time="2026-01-11T07:56:49.289878451Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 11 07:56:49 no-preload-875594 crio[766]: time="2026-01-11T07:56:49.296261522Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:ca8f661be836e6a577367dc058bbbf6311160b974d428ed6f59116478d9a86b9 UID:41b852b9-8aa8-498c-ae93-7ae80b982c8d NetNS:/var/run/netns/58f2f7ec-de7b-468e-af64-82bf69265216 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0004a3338}] Aliases:map[]}"
	Jan 11 07:56:49 no-preload-875594 crio[766]: time="2026-01-11T07:56:49.296299017Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Jan 11 07:56:49 no-preload-875594 crio[766]: time="2026-01-11T07:56:49.319528064Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:ca8f661be836e6a577367dc058bbbf6311160b974d428ed6f59116478d9a86b9 UID:41b852b9-8aa8-498c-ae93-7ae80b982c8d NetNS:/var/run/netns/58f2f7ec-de7b-468e-af64-82bf69265216 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0004a3338}] Aliases:map[]}"
	Jan 11 07:56:49 no-preload-875594 crio[766]: time="2026-01-11T07:56:49.319725569Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Jan 11 07:56:49 no-preload-875594 crio[766]: time="2026-01-11T07:56:49.320890602Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Jan 11 07:56:49 no-preload-875594 crio[766]: time="2026-01-11T07:56:49.322148194Z" level=info msg="Ran pod sandbox ca8f661be836e6a577367dc058bbbf6311160b974d428ed6f59116478d9a86b9 with infra container: default/busybox/POD" id=2071ccbd-fb6a-4d64-afc4-d5955e4884f5 name=/runtime.v1.RuntimeService/RunPodSandbox
	Jan 11 07:56:49 no-preload-875594 crio[766]: time="2026-01-11T07:56:49.323658556Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=68d979ec-9877-4d38-a60d-8fb7c243b8e4 name=/runtime.v1.ImageService/ImageStatus
	Jan 11 07:56:49 no-preload-875594 crio[766]: time="2026-01-11T07:56:49.323795966Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=68d979ec-9877-4d38-a60d-8fb7c243b8e4 name=/runtime.v1.ImageService/ImageStatus
	Jan 11 07:56:49 no-preload-875594 crio[766]: time="2026-01-11T07:56:49.3239361Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=68d979ec-9877-4d38-a60d-8fb7c243b8e4 name=/runtime.v1.ImageService/ImageStatus
	Jan 11 07:56:49 no-preload-875594 crio[766]: time="2026-01-11T07:56:49.324927926Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=654e43d8-db56-49e5-9b99-b56e0156ed89 name=/runtime.v1.ImageService/PullImage
	Jan 11 07:56:49 no-preload-875594 crio[766]: time="2026-01-11T07:56:49.325347124Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Jan 11 07:56:50 no-preload-875594 crio[766]: time="2026-01-11T07:56:50.502771342Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=654e43d8-db56-49e5-9b99-b56e0156ed89 name=/runtime.v1.ImageService/PullImage
	Jan 11 07:56:50 no-preload-875594 crio[766]: time="2026-01-11T07:56:50.503482757Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=3f3c40e0-7694-483f-9bb6-0483f092e46e name=/runtime.v1.ImageService/ImageStatus
	Jan 11 07:56:50 no-preload-875594 crio[766]: time="2026-01-11T07:56:50.505457Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=0383c933-0f56-4778-aa9c-44a7564d5412 name=/runtime.v1.ImageService/ImageStatus
	Jan 11 07:56:50 no-preload-875594 crio[766]: time="2026-01-11T07:56:50.511628949Z" level=info msg="Creating container: default/busybox/busybox" id=5e3db3a1-e4ce-4b23-ae51-4cba64991083 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 11 07:56:50 no-preload-875594 crio[766]: time="2026-01-11T07:56:50.511786786Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 11 07:56:50 no-preload-875594 crio[766]: time="2026-01-11T07:56:50.516026575Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 11 07:56:50 no-preload-875594 crio[766]: time="2026-01-11T07:56:50.516621568Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 11 07:56:50 no-preload-875594 crio[766]: time="2026-01-11T07:56:50.545933144Z" level=info msg="Created container b25ea979808a7c02d9ccdee67bf3955b41416baf362d31c65f61148255e5857a: default/busybox/busybox" id=5e3db3a1-e4ce-4b23-ae51-4cba64991083 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 11 07:56:50 no-preload-875594 crio[766]: time="2026-01-11T07:56:50.546641292Z" level=info msg="Starting container: b25ea979808a7c02d9ccdee67bf3955b41416baf362d31c65f61148255e5857a" id=6ab6a523-424f-4499-a92f-dbc127ab1bc4 name=/runtime.v1.RuntimeService/StartContainer
	Jan 11 07:56:50 no-preload-875594 crio[766]: time="2026-01-11T07:56:50.548922319Z" level=info msg="Started container" PID=2849 containerID=b25ea979808a7c02d9ccdee67bf3955b41416baf362d31c65f61148255e5857a description=default/busybox/busybox id=6ab6a523-424f-4499-a92f-dbc127ab1bc4 name=/runtime.v1.RuntimeService/StartContainer sandboxID=ca8f661be836e6a577367dc058bbbf6311160b974d428ed6f59116478d9a86b9
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	b25ea979808a7       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   6 seconds ago       Running             busybox                   0                   ca8f661be836e       busybox                                     default
	a138d750f1aaa       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                      11 seconds ago      Running             coredns                   0                   8767669fe91f0       coredns-7d764666f9-gw9b6                    kube-system
	c85734dc3a6db       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      11 seconds ago      Running             storage-provisioner       0                   8a885e60e88e8       storage-provisioner                         kube-system
	c13ebbcc51b64       docker.io/kindest/kindnetd@sha256:7c22558dc06a570d46ea6e8a73b23cdc754eb81f7c08d3441a3171ad359ffc27    22 seconds ago      Running             kindnet-cni               0                   3a4c55dd37a25       kindnet-p7w57                               kube-system
	c5f0e1bfdcbf5       32652ff1bbe6b6df16a3dc621fcad4e6e185a2852bffc5847aab79e36c54bfb8                                      24 seconds ago      Running             kube-proxy                0                   1f15cee855ac6       kube-proxy-49b4x                            kube-system
	bfa4d6eb961cc       5c6acd67e9cd10eb60a246cd233db251ef62ea97e6572f897e873f0cb648f499                                      34 seconds ago      Running             kube-apiserver            0                   be2bf1efe544f       kube-apiserver-no-preload-875594            kube-system
	9ad4a2b18e18e       2c9a4b058bd7e6ec479c38f9e8a1dac2f5ee5b0a3ebda6dfac968e8720229508                                      34 seconds ago      Running             kube-controller-manager   0                   a110554a447d5       kube-controller-manager-no-preload-875594   kube-system
	171a29004be42       550794e3b12ac21ec7fd940bdfb45f7c8dae3c52acd8c70e580b882de20c3dcc                                      34 seconds ago      Running             kube-scheduler            0                   b3f18f1640100       kube-scheduler-no-preload-875594            kube-system
	5f325ffab3408       0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2                                      34 seconds ago      Running             etcd                      0                   88ed2add0e496       etcd-no-preload-875594                      kube-system
	
	
	==> coredns [a138d750f1aaa2c9b1ac5e9a194c75f506856d4fa30b5f0a98f82867b0c35f05] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:55526 - 35345 "HINFO IN 1239199298773107862.7793444566468515718. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.017426491s
	
	
	==> describe nodes <==
	Name:               no-preload-875594
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-875594
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4473aa4ffaa416872fe849e19c0ce3dabca02c04
	                    minikube.k8s.io/name=no-preload-875594
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2026_01_11T07_56_28_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 11 Jan 2026 07:56:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-875594
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 11 Jan 2026 07:56:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 11 Jan 2026 07:56:45 +0000   Sun, 11 Jan 2026 07:56:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 11 Jan 2026 07:56:45 +0000   Sun, 11 Jan 2026 07:56:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 11 Jan 2026 07:56:45 +0000   Sun, 11 Jan 2026 07:56:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 11 Jan 2026 07:56:45 +0000   Sun, 11 Jan 2026 07:56:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    no-preload-875594
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 19a608e996fda0d6a8d0aefd69620b69
	  System UUID:                47c67b25-2f7f-4c80-9308-fd505f15ded0
	  Boot ID:                    bf5ef4fd-1e5e-4242-b522-48b308ed11f1
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.35.0
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 coredns-7d764666f9-gw9b6                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     24s
	  kube-system                 etcd-no-preload-875594                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         30s
	  kube-system                 kindnet-p7w57                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      24s
	  kube-system                 kube-apiserver-no-preload-875594             250m (3%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-controller-manager-no-preload-875594    200m (2%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-proxy-49b4x                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	  kube-system                 kube-scheduler-no-preload-875594             100m (1%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  26s   node-controller  Node no-preload-875594 event: Registered Node no-preload-875594 in Controller
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff d2 27 60 f0 e8 33 08 06
	[  +7.744805] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ae ab 39 92 a9 51 08 06
	[  +3.804668] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff f6 cc 68 e4 77 08 08 06
	[  +0.000399] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff d2 27 60 f0 e8 33 08 06
	[ +13.685163] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff ce 63 58 3f ae 06 08 06
	[  +0.000359] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ae ab 39 92 a9 51 08 06
	[ +15.294069] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 62 fd 79 5e 2e 8a 08 06
	[  +0.000335] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 92 a7 42 30 c5 33 08 06
	[  +0.001094] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 76 49 4e 78 31 d8 08 06
	[Jan11 07:56] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 1e 72 0a 69 e2 8c 08 06
	[  +0.129588] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 46 31 8d 74 d3 50 08 06
	[ +23.675508] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 72 b4 a7 bf 59 ba 08 06
	[  +0.000409] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 46 31 8d 74 d3 50 08 06
	
	
	==> etcd [5f325ffab3408a80d2d2a0dd51d67bf90ff1a5db3032fda86f4cbcb8b57a953c] <==
	{"level":"info","ts":"2026-01-11T07:56:23.647510Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"ea7e25599daad906 became pre-candidate at term 1"}
	{"level":"info","ts":"2026-01-11T07:56:23.647574Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 1"}
	{"level":"info","ts":"2026-01-11T07:56:23.647588Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2026-01-11T07:56:23.647602Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"ea7e25599daad906 became candidate at term 2"}
	{"level":"info","ts":"2026-01-11T07:56:23.648146Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2026-01-11T07:56:23.648175Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2026-01-11T07:56:23.648190Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"ea7e25599daad906 became leader at term 2"}
	{"level":"info","ts":"2026-01-11T07:56:23.648198Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2026-01-11T07:56:23.648874Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:no-preload-875594 ClientURLs:[https://192.168.76.2:2379]}","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2026-01-11T07:56:23.648943Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-11T07:56:23.649039Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-11T07:56:23.649078Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2026-01-11T07:56:23.649103Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2026-01-11T07:56:23.649191Z","caller":"etcdserver/server.go:2420","msg":"setting up initial cluster version using v3 API","cluster-version":"3.6"}
	{"level":"info","ts":"2026-01-11T07:56:23.649817Z","caller":"membership/cluster.go:682","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.6"}
	{"level":"info","ts":"2026-01-11T07:56:23.649911Z","caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.6"}
	{"level":"info","ts":"2026-01-11T07:56:23.649940Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-11T07:56:23.649955Z","caller":"etcdserver/server.go:2440","msg":"cluster version is updated","cluster-version":"3.6"}
	{"level":"info","ts":"2026-01-11T07:56:23.649995Z","caller":"version/monitor.go:116","msg":"cluster version differs from storage version.","cluster-version":"3.6.0","storage-version":"3.5.0"}
	{"level":"info","ts":"2026-01-11T07:56:23.650092Z","caller":"schema/migration.go:65","msg":"updated storage version","new-storage-version":"3.6.0"}
	{"level":"info","ts":"2026-01-11T07:56:23.650645Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-11T07:56:23.654922Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2026-01-11T07:56:23.655342Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2026-01-11T07:56:26.411913Z","caller":"traceutil/trace.go:172","msg":"trace[379624998] transaction","detail":"{read_only:false; response_revision:136; number_of_response:1; }","duration":"134.723965ms","start":"2026-01-11T07:56:26.277151Z","end":"2026-01-11T07:56:26.411875Z","steps":["trace[379624998] 'process raft request'  (duration: 126.606717ms)"],"step_count":1}
	{"level":"info","ts":"2026-01-11T07:56:32.789097Z","caller":"traceutil/trace.go:172","msg":"trace[750540832] transaction","detail":"{read_only:false; response_revision:354; number_of_response:1; }","duration":"106.768819ms","start":"2026-01-11T07:56:32.682301Z","end":"2026-01-11T07:56:32.789070Z","steps":["trace[750540832] 'process raft request'  (duration: 53.314635ms)","trace[750540832] 'compare'  (duration: 53.22314ms)"],"step_count":2}
	
	
	==> kernel <==
	 07:56:57 up 39 min,  0 user,  load average: 3.94, 3.44, 2.41
	Linux no-preload-875594 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [c13ebbcc51b64201633f626da29f2e3cc08d1bfbd8270446bdd1f9a3a80c14f6] <==
	I0111 07:56:35.029308       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0111 07:56:35.029595       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I0111 07:56:35.029780       1 main.go:148] setting mtu 1500 for CNI 
	I0111 07:56:35.029823       1 main.go:178] kindnetd IP family: "ipv4"
	I0111 07:56:35.029854       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2026-01-11T07:56:35Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0111 07:56:35.231405       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0111 07:56:35.231449       1 controller.go:381] "Waiting for informer caches to sync"
	I0111 07:56:35.231460       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0111 07:56:35.231584       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I0111 07:56:35.434294       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0111 07:56:35.434325       1 metrics.go:72] Registering metrics
	I0111 07:56:35.434376       1 controller.go:711] "Syncing nftables rules"
	I0111 07:56:45.232051       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0111 07:56:45.232119       1 main.go:301] handling current node
	I0111 07:56:55.234877       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0111 07:56:55.234947       1 main.go:301] handling current node
	
	
	==> kube-apiserver [bfa4d6eb961cc3b74d2006e51c4d9200970748fc5be60b3527a773c2d1b2a14d] <==
	I0111 07:56:25.144193       1 shared_informer.go:377] "Caches are synced"
	I0111 07:56:25.145192       1 controller.go:667] quota admission added evaluator for: namespaces
	I0111 07:56:25.148477       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0111 07:56:25.150166       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0111 07:56:25.150435       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0111 07:56:25.151671       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I0111 07:56:25.345364       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I0111 07:56:26.054709       1 storage_scheduling.go:123] created PriorityClass system-node-critical with value 2000001000
	I0111 07:56:26.058732       1 storage_scheduling.go:123] created PriorityClass system-cluster-critical with value 2000000000
	I0111 07:56:26.058750       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I0111 07:56:26.843148       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0111 07:56:26.890369       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0111 07:56:26.949780       1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0111 07:56:26.955627       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I0111 07:56:26.956722       1 controller.go:667] quota admission added evaluator for: endpoints
	I0111 07:56:26.961206       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0111 07:56:27.066319       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I0111 07:56:27.824495       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I0111 07:56:27.839023       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0111 07:56:27.847390       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I0111 07:56:32.570753       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0111 07:56:32.575731       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0111 07:56:32.795340       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0111 07:56:33.070096       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E0111 07:56:56.084979       1 conn.go:339] Error on socket receive: read tcp 192.168.76.2:8443->192.168.76.1:51684: use of closed network connection
	
	
	==> kube-controller-manager [9ad4a2b18e18e0da5aac4f93478dab3ee39d3c3d31220d4d1988cada2354e69a] <==
	I0111 07:56:31.874696       1 node_lifecycle_controller.go:1038] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I0111 07:56:31.875067       1 shared_informer.go:377] "Caches are synced"
	I0111 07:56:31.875308       1 shared_informer.go:377] "Caches are synced"
	I0111 07:56:31.875356       1 shared_informer.go:377] "Caches are synced"
	I0111 07:56:31.875403       1 shared_informer.go:377] "Caches are synced"
	I0111 07:56:31.875651       1 shared_informer.go:377] "Caches are synced"
	I0111 07:56:31.876559       1 shared_informer.go:377] "Caches are synced"
	I0111 07:56:31.876654       1 shared_informer.go:377] "Caches are synced"
	I0111 07:56:31.876730       1 shared_informer.go:377] "Caches are synced"
	I0111 07:56:31.876749       1 shared_informer.go:377] "Caches are synced"
	I0111 07:56:31.876828       1 shared_informer.go:377] "Caches are synced"
	I0111 07:56:31.876896       1 shared_informer.go:377] "Caches are synced"
	I0111 07:56:31.873932       1 shared_informer.go:377] "Caches are synced"
	I0111 07:56:31.876961       1 shared_informer.go:377] "Caches are synced"
	I0111 07:56:31.877227       1 shared_informer.go:377] "Caches are synced"
	I0111 07:56:31.877622       1 shared_informer.go:377] "Caches are synced"
	I0111 07:56:31.876946       1 shared_informer.go:377] "Caches are synced"
	I0111 07:56:31.880109       1 shared_informer.go:370] "Waiting for caches to sync"
	I0111 07:56:31.882848       1 range_allocator.go:433] "Set node PodCIDR" node="no-preload-875594" podCIDRs=["10.244.0.0/24"]
	I0111 07:56:31.883997       1 shared_informer.go:377] "Caches are synced"
	I0111 07:56:31.974121       1 shared_informer.go:377] "Caches are synced"
	I0111 07:56:31.974164       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I0111 07:56:31.974170       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I0111 07:56:31.980607       1 shared_informer.go:377] "Caches are synced"
	I0111 07:56:46.877976       1 node_lifecycle_controller.go:1057] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-proxy [c5f0e1bfdcbf5d13446d58c120a0a44e874dc743c6be105cd97ae474b04be0d7] <==
	I0111 07:56:33.496571       1 server_linux.go:53] "Using iptables proxy"
	I0111 07:56:33.571927       1 shared_informer.go:370] "Waiting for caches to sync"
	I0111 07:56:33.672162       1 shared_informer.go:377] "Caches are synced"
	I0111 07:56:33.672206       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E0111 07:56:33.672295       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0111 07:56:33.693322       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0111 07:56:33.693387       1 server_linux.go:136] "Using iptables Proxier"
	I0111 07:56:33.699216       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0111 07:56:33.699749       1 server.go:529] "Version info" version="v1.35.0"
	I0111 07:56:33.699769       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0111 07:56:33.701643       1 config.go:200] "Starting service config controller"
	I0111 07:56:33.701681       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0111 07:56:33.701684       1 config.go:106] "Starting endpoint slice config controller"
	I0111 07:56:33.701721       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0111 07:56:33.701839       1 config.go:403] "Starting serviceCIDR config controller"
	I0111 07:56:33.701891       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0111 07:56:33.702442       1 config.go:309] "Starting node config controller"
	I0111 07:56:33.702506       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0111 07:56:33.702515       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0111 07:56:33.802009       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0111 07:56:33.802078       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0111 07:56:33.803466       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [171a29004be421aa81859662b524b02871abba5897f71e4d6be00b7c6fbf96d9] <==
	E0111 07:56:25.118147       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E0111 07:56:25.118371       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E0111 07:56:25.118421       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E0111 07:56:25.118499       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E0111 07:56:25.118610       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E0111 07:56:25.962041       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E0111 07:56:26.079466       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E0111 07:56:26.108083       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E0111 07:56:26.114972       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E0111 07:56:26.124230       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E0111 07:56:26.141415       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E0111 07:56:26.173563       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E0111 07:56:26.198397       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E0111 07:56:26.232854       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E0111 07:56:26.273246       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E0111 07:56:26.332239       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E0111 07:56:26.396771       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E0111 07:56:26.410835       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1693" type="*v1.ConfigMap"
	E0111 07:56:26.451070       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E0111 07:56:26.462513       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E0111 07:56:26.579780       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E0111 07:56:26.583220       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E0111 07:56:26.587861       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E0111 07:56:26.633879       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	I0111 07:56:28.908640       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Jan 11 07:56:33 no-preload-875594 kubelet[2171]: I0111 07:56:33.170632    2171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mkrjs\" (UniqueName: \"kubernetes.io/projected/f7547d3e-e98d-42af-a0cc-557386ef7efa-kube-api-access-mkrjs\") pod \"kindnet-p7w57\" (UID: \"f7547d3e-e98d-42af-a0cc-557386ef7efa\") " pod="kube-system/kindnet-p7w57"
	Jan 11 07:56:33 no-preload-875594 kubelet[2171]: I0111 07:56:33.170667    2171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f7547d3e-e98d-42af-a0cc-557386ef7efa-xtables-lock\") pod \"kindnet-p7w57\" (UID: \"f7547d3e-e98d-42af-a0cc-557386ef7efa\") " pod="kube-system/kindnet-p7w57"
	Jan 11 07:56:33 no-preload-875594 kubelet[2171]: I0111 07:56:33.170693    2171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f7547d3e-e98d-42af-a0cc-557386ef7efa-lib-modules\") pod \"kindnet-p7w57\" (UID: \"f7547d3e-e98d-42af-a0cc-557386ef7efa\") " pod="kube-system/kindnet-p7w57"
	Jan 11 07:56:33 no-preload-875594 kubelet[2171]: I0111 07:56:33.170715    2171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c9fb5a8d-61b1-4674-aafc-a24c6c468eee-kube-proxy\") pod \"kube-proxy-49b4x\" (UID: \"c9fb5a8d-61b1-4674-aafc-a24c6c468eee\") " pod="kube-system/kube-proxy-49b4x"
	Jan 11 07:56:33 no-preload-875594 kubelet[2171]: I0111 07:56:33.170734    2171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c9fb5a8d-61b1-4674-aafc-a24c6c468eee-lib-modules\") pod \"kube-proxy-49b4x\" (UID: \"c9fb5a8d-61b1-4674-aafc-a24c6c468eee\") " pod="kube-system/kube-proxy-49b4x"
	Jan 11 07:56:33 no-preload-875594 kubelet[2171]: I0111 07:56:33.170768    2171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9xsq\" (UniqueName: \"kubernetes.io/projected/c9fb5a8d-61b1-4674-aafc-a24c6c468eee-kube-api-access-t9xsq\") pod \"kube-proxy-49b4x\" (UID: \"c9fb5a8d-61b1-4674-aafc-a24c6c468eee\") " pod="kube-system/kube-proxy-49b4x"
	Jan 11 07:56:33 no-preload-875594 kubelet[2171]: I0111 07:56:33.827967    2171 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-49b4x" podStartSLOduration=0.827947392 podStartE2EDuration="827.947392ms" podCreationTimestamp="2026-01-11 07:56:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-11 07:56:33.827622597 +0000 UTC m=+6.185075486" watchObservedRunningTime="2026-01-11 07:56:33.827947392 +0000 UTC m=+6.185400282"
	Jan 11 07:56:37 no-preload-875594 kubelet[2171]: E0111 07:56:37.132502    2171 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-no-preload-875594" containerName="kube-apiserver"
	Jan 11 07:56:37 no-preload-875594 kubelet[2171]: I0111 07:56:37.148352    2171 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kindnet-p7w57" podStartSLOduration=2.824022388 podStartE2EDuration="4.148331053s" podCreationTimestamp="2026-01-11 07:56:33 +0000 UTC" firstStartedPulling="2026-01-11 07:56:33.408230954 +0000 UTC m=+5.765683835" lastFinishedPulling="2026-01-11 07:56:34.73253962 +0000 UTC m=+7.089992500" observedRunningTime="2026-01-11 07:56:34.833947674 +0000 UTC m=+7.191400565" watchObservedRunningTime="2026-01-11 07:56:37.148331053 +0000 UTC m=+9.505783937"
	Jan 11 07:56:38 no-preload-875594 kubelet[2171]: E0111 07:56:38.371027    2171 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-no-preload-875594" containerName="kube-controller-manager"
	Jan 11 07:56:38 no-preload-875594 kubelet[2171]: E0111 07:56:38.561753    2171 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-no-preload-875594" containerName="etcd"
	Jan 11 07:56:40 no-preload-875594 kubelet[2171]: E0111 07:56:40.187599    2171 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-no-preload-875594" containerName="kube-scheduler"
	Jan 11 07:56:45 no-preload-875594 kubelet[2171]: I0111 07:56:45.395580    2171 kubelet_node_status.go:427] "Fast updating node status as it just became ready"
	Jan 11 07:56:45 no-preload-875594 kubelet[2171]: I0111 07:56:45.465633    2171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fab33c8a-a60a-4148-9d33-607b857a3867-config-volume\") pod \"coredns-7d764666f9-gw9b6\" (UID: \"fab33c8a-a60a-4148-9d33-607b857a3867\") " pod="kube-system/coredns-7d764666f9-gw9b6"
	Jan 11 07:56:45 no-preload-875594 kubelet[2171]: I0111 07:56:45.465688    2171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llljw\" (UniqueName: \"kubernetes.io/projected/fab33c8a-a60a-4148-9d33-607b857a3867-kube-api-access-llljw\") pod \"coredns-7d764666f9-gw9b6\" (UID: \"fab33c8a-a60a-4148-9d33-607b857a3867\") " pod="kube-system/coredns-7d764666f9-gw9b6"
	Jan 11 07:56:45 no-preload-875594 kubelet[2171]: I0111 07:56:45.465721    2171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/5e5ad34f-7c3c-4f85-8280-d79b3586001e-tmp\") pod \"storage-provisioner\" (UID: \"5e5ad34f-7c3c-4f85-8280-d79b3586001e\") " pod="kube-system/storage-provisioner"
	Jan 11 07:56:45 no-preload-875594 kubelet[2171]: I0111 07:56:45.465794    2171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8phw\" (UniqueName: \"kubernetes.io/projected/5e5ad34f-7c3c-4f85-8280-d79b3586001e-kube-api-access-z8phw\") pod \"storage-provisioner\" (UID: \"5e5ad34f-7c3c-4f85-8280-d79b3586001e\") " pod="kube-system/storage-provisioner"
	Jan 11 07:56:45 no-preload-875594 kubelet[2171]: E0111 07:56:45.844100    2171 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-gw9b6" containerName="coredns"
	Jan 11 07:56:45 no-preload-875594 kubelet[2171]: I0111 07:56:45.865175    2171 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-gw9b6" podStartSLOduration=12.865155056 podStartE2EDuration="12.865155056s" podCreationTimestamp="2026-01-11 07:56:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-11 07:56:45.864911213 +0000 UTC m=+18.222364102" watchObservedRunningTime="2026-01-11 07:56:45.865155056 +0000 UTC m=+18.222607950"
	Jan 11 07:56:45 no-preload-875594 kubelet[2171]: I0111 07:56:45.865376    2171 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=12.865365442 podStartE2EDuration="12.865365442s" podCreationTimestamp="2026-01-11 07:56:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-11 07:56:45.85256886 +0000 UTC m=+18.210021748" watchObservedRunningTime="2026-01-11 07:56:45.865365442 +0000 UTC m=+18.222818330"
	Jan 11 07:56:46 no-preload-875594 kubelet[2171]: E0111 07:56:46.846377    2171 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-gw9b6" containerName="coredns"
	Jan 11 07:56:47 no-preload-875594 kubelet[2171]: E0111 07:56:47.138774    2171 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-no-preload-875594" containerName="kube-apiserver"
	Jan 11 07:56:47 no-preload-875594 kubelet[2171]: E0111 07:56:47.849045    2171 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-gw9b6" containerName="coredns"
	Jan 11 07:56:49 no-preload-875594 kubelet[2171]: I0111 07:56:49.092157    2171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8kwp\" (UniqueName: \"kubernetes.io/projected/41b852b9-8aa8-498c-ae93-7ae80b982c8d-kube-api-access-n8kwp\") pod \"busybox\" (UID: \"41b852b9-8aa8-498c-ae93-7ae80b982c8d\") " pod="default/busybox"
	Jan 11 07:56:50 no-preload-875594 kubelet[2171]: I0111 07:56:50.869840    2171 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.689678222 podStartE2EDuration="2.869792674s" podCreationTimestamp="2026-01-11 07:56:48 +0000 UTC" firstStartedPulling="2026-01-11 07:56:49.324400006 +0000 UTC m=+21.681852895" lastFinishedPulling="2026-01-11 07:56:50.504514461 +0000 UTC m=+22.861967347" observedRunningTime="2026-01-11 07:56:50.8696728 +0000 UTC m=+23.227125689" watchObservedRunningTime="2026-01-11 07:56:50.869792674 +0000 UTC m=+23.227245567"
	
	
	==> storage-provisioner [c85734dc3a6db35115d71f53771473815b74b08b5d12e208711135150b4cd3d6] <==
	I0111 07:56:45.804227       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0111 07:56:45.813967       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0111 07:56:45.814036       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W0111 07:56:45.816538       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 07:56:45.822004       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0111 07:56:45.822177       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0111 07:56:45.822364       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-875594_b0f54d11-f670-42d8-84f7-cd1b1d0a8369!
	I0111 07:56:45.822346       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"adcc6b4b-1fe9-448b-b2d9-cf067e532a3b", APIVersion:"v1", ResourceVersion:"448", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-875594_b0f54d11-f670-42d8-84f7-cd1b1d0a8369 became leader
	W0111 07:56:45.826386       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 07:56:45.832228       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0111 07:56:45.923211       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-875594_b0f54d11-f670-42d8-84f7-cd1b1d0a8369!
	W0111 07:56:47.835368       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 07:56:47.841838       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 07:56:49.845917       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 07:56:49.850641       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 07:56:51.854628       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 07:56:51.860256       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 07:56:53.863740       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 07:56:53.867724       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 07:56:55.870884       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 07:56:55.875251       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 07:56:57.878825       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
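For reference, the kubelet "Observed pod startup duration" lines in the log above report podStartSLOduration as the end-to-end startup time minus the image-pull window; the default/busybox numbers are consistent with that reading. A quick check using the monotonic "m=+" offsets copied from that log entry (illustrative only, not part of the test suite):

	package main

	import "fmt"

	func main() {
		// Monotonic offsets (the "m=+..." values) from the default/busybox kubelet entry above.
		const (
			firstStartedPulling = 21.681852895 // seconds since kubelet start
			lastFinishedPulling = 22.861967347
			podStartE2E         = 2.869792674 // watchObservedRunningTime - podCreationTimestamp
		)
		pull := lastFinishedPulling - firstStartedPulling
		fmt.Printf("image pull window:   %.9fs\n", pull)             // ~1.180114452s
		fmt.Printf("podStartSLOduration: %.9fs\n", podStartE2E-pull) // ~1.689678222s, as logged
	}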
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-875594 -n no-preload-875594
helpers_test.go:270: (dbg) Run:  kubectl --context no-preload-875594 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.23s)
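Unrelated to this failure, the storage-provisioner log above is full of "v1 Endpoints is deprecated in v1.33+" warnings because its leader election still takes an Endpoints-based lock on kube-system/k8s.io-minikube-hostpath. A minimal sketch of the Lease-based alternative using client-go's leaderelection package, assuming in-cluster credentials; the names and timings below are illustrative, not the provisioner's actual code:

	package main

	import (
		"context"
		"os"
		"time"

		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
		"k8s.io/client-go/tools/leaderelection"
		"k8s.io/client-go/tools/leaderelection/resourcelock"
		"k8s.io/klog/v2"
	)

	func main() {
		cfg, err := rest.InClusterConfig() // assumes the process runs inside the cluster
		if err != nil {
			klog.Fatal(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		id, _ := os.Hostname()
		// A coordination.k8s.io/v1 Lease lock avoids the v1 Endpoints deprecation warnings.
		lock, err := resourcelock.New(resourcelock.LeasesResourceLock,
			"kube-system", "k8s.io-minikube-hostpath",
			client.CoreV1(), client.CoordinationV1(),
			resourcelock.ResourceLockConfig{Identity: id})
		if err != nil {
			klog.Fatal(err)
		}

		leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
			Lock:          lock,
			LeaseDuration: 15 * time.Second,
			RenewDeadline: 10 * time.Second,
			RetryPeriod:   2 * time.Second,
			Callbacks: leaderelection.LeaderCallbacks{
				OnStartedLeading: func(ctx context.Context) {
					klog.Info("acquired lease, starting provisioner controller")
				},
				OnStoppedLeading: func() {
					klog.Info("lost lease, stopping")
				},
			},
		})
	}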

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.04s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-285966 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-285966 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (256.194172ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-11T07:57:01Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-285966 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
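The exit status 11 above comes from minikube's "check paused" step: before enabling an addon it lists the runtime's containers to confirm the cluster is not paused, and here `sudo runc list -f json` fails because /run/runc does not exist on the freshly started crio node. A rough, hypothetical Go sketch of that probe (not minikube's actual code; the command comes from the error text above and the JSON field names are assumptions based on runc's documented list output):

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// containerState mirrors the fields of `runc list -f json` output used here
	// (field names assumed from runc's documented list format).
	type containerState struct {
		ID     string `json:"id"`
		Status string `json:"status"`
	}

	func main() {
		// Roughly what the failing "check paused: list paused" step does: shell out to runc
		// and look for paused containers. On this node the command fails with
		// "open /run/runc: no such file or directory" because no runc state dir exists yet.
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
		if err != nil {
			fmt.Println("runc list failed:", err)
			return
		}
		var containers []containerState
		if err := json.Unmarshal(out, &containers); err != nil {
			fmt.Println("unexpected runc output:", err)
			return
		}
		for _, c := range containers {
			if c.Status == "paused" {
				fmt.Println("paused container:", c.ID)
			}
		}
	}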
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-285966 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context embed-certs-285966 describe deploy/metrics-server -n kube-system: exit status 1 (56.204735ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-285966 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
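The assertion at start_stop_delete_test.go:219 expects the metrics-server Deployment to carry the overridden image "fake.domain/registry.k8s.io/echoserver:1.4"; here the deployment was never created because the enable command itself failed, so the describe call above returns NotFound. A minimal client-go sketch of that kind of check (a standalone approximation, not the test's own helper; the kubeconfig path is an assumption):

	package main

	import (
		"context"
		"fmt"
		"strings"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Assumption: the embed-certs-285966 context lives in the default kubeconfig.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		dep, err := client.AppsV1().Deployments("kube-system").Get(context.Background(), "metrics-server", metav1.GetOptions{})
		if err != nil {
			// This is the branch hit above: the deployment does not exist.
			fmt.Println("metrics-server deployment not found:", err)
			return
		}
		want := "fake.domain/registry.k8s.io/echoserver:1.4"
		for _, c := range dep.Spec.Template.Spec.Containers {
			fmt.Printf("container %s image %s (matches override: %v)\n",
				c.Name, c.Image, strings.Contains(c.Image, want))
		}
	}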
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect embed-certs-285966
helpers_test.go:244: (dbg) docker inspect embed-certs-285966:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "97da5d38996927edb1a657a3a9511f58b9f254917ccec49ae4cc338d6e0853c0",
	        "Created": "2026-01-11T07:56:18.369460444Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 296205,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2026-01-11T07:56:18.558764891Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:9d81ad6f098dcf669510a21e6f98da4d3301079c139c22f3d7bcee050830f45e",
	        "ResolvConfPath": "/var/lib/docker/containers/97da5d38996927edb1a657a3a9511f58b9f254917ccec49ae4cc338d6e0853c0/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/97da5d38996927edb1a657a3a9511f58b9f254917ccec49ae4cc338d6e0853c0/hostname",
	        "HostsPath": "/var/lib/docker/containers/97da5d38996927edb1a657a3a9511f58b9f254917ccec49ae4cc338d6e0853c0/hosts",
	        "LogPath": "/var/lib/docker/containers/97da5d38996927edb1a657a3a9511f58b9f254917ccec49ae4cc338d6e0853c0/97da5d38996927edb1a657a3a9511f58b9f254917ccec49ae4cc338d6e0853c0-json.log",
	        "Name": "/embed-certs-285966",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-285966:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-285966",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "97da5d38996927edb1a657a3a9511f58b9f254917ccec49ae4cc338d6e0853c0",
	                "LowerDir": "/var/lib/docker/overlay2/245cbe138f6501f3caed5f3c5a13b7be8d168bd7e263f71051881c6d647d7c9e-init/diff:/var/lib/docker/overlay2/fbed3e2386e680328722e9ed68250a4aae1dd3b50a64848ee2ebb685fb1cc002/diff",
	                "MergedDir": "/var/lib/docker/overlay2/245cbe138f6501f3caed5f3c5a13b7be8d168bd7e263f71051881c6d647d7c9e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/245cbe138f6501f3caed5f3c5a13b7be8d168bd7e263f71051881c6d647d7c9e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/245cbe138f6501f3caed5f3c5a13b7be8d168bd7e263f71051881c6d647d7c9e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-285966",
	                "Source": "/var/lib/docker/volumes/embed-certs-285966/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-285966",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-285966",
	                "name.minikube.sigs.k8s.io": "embed-certs-285966",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "8b180dd7b354da0aded526bc16be4514295bc887854b72150b69b3b52dc78736",
	            "SandboxKey": "/var/run/docker/netns/8b180dd7b354",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33098"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33099"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33102"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33100"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33101"
	                    }
	                ]
	            },
	            "Networks": {
	                "embed-certs-285966": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "381973c00b1f29aaeef3a0b61a9772036cd7ea9cc64cf06cd8af7e4ef146b220",
	                    "EndpointID": "c5477ae0988cdcf90fc86866f6cffecb7cf14d04e2288316692d027eb132ee91",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "6e:4f:e5:86:a1:05",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-285966",
	                        "97da5d389969"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-285966 -n embed-certs-285966
helpers_test.go:253: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-285966 logs -n 25
helpers_test.go:261: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                   │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-181803 sudo systemctl cat docker --no-pager                                                                                                                    │ bridge-181803                │ jenkins │ v1.37.0 │ 11 Jan 26 07:56 UTC │ 11 Jan 26 07:56 UTC │
	│ ssh     │ -p bridge-181803 sudo cat /etc/docker/daemon.json                                                                                                                        │ bridge-181803                │ jenkins │ v1.37.0 │ 11 Jan 26 07:56 UTC │                     │
	│ ssh     │ -p bridge-181803 sudo docker system info                                                                                                                                 │ bridge-181803                │ jenkins │ v1.37.0 │ 11 Jan 26 07:56 UTC │                     │
	│ ssh     │ -p bridge-181803 sudo systemctl status cri-docker --all --full --no-pager                                                                                                │ bridge-181803                │ jenkins │ v1.37.0 │ 11 Jan 26 07:56 UTC │                     │
	│ addons  │ enable metrics-server -p old-k8s-version-086314 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                             │ old-k8s-version-086314       │ jenkins │ v1.37.0 │ 11 Jan 26 07:56 UTC │                     │
	│ ssh     │ -p bridge-181803 sudo systemctl cat cri-docker --no-pager                                                                                                                │ bridge-181803                │ jenkins │ v1.37.0 │ 11 Jan 26 07:56 UTC │ 11 Jan 26 07:56 UTC │
	│ ssh     │ -p bridge-181803 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                           │ bridge-181803                │ jenkins │ v1.37.0 │ 11 Jan 26 07:56 UTC │                     │
	│ ssh     │ -p bridge-181803 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                     │ bridge-181803                │ jenkins │ v1.37.0 │ 11 Jan 26 07:56 UTC │ 11 Jan 26 07:56 UTC │
	│ ssh     │ -p bridge-181803 sudo cri-dockerd --version                                                                                                                              │ bridge-181803                │ jenkins │ v1.37.0 │ 11 Jan 26 07:56 UTC │ 11 Jan 26 07:56 UTC │
	│ ssh     │ -p bridge-181803 sudo systemctl status containerd --all --full --no-pager                                                                                                │ bridge-181803                │ jenkins │ v1.37.0 │ 11 Jan 26 07:56 UTC │                     │
	│ ssh     │ -p bridge-181803 sudo systemctl cat containerd --no-pager                                                                                                                │ bridge-181803                │ jenkins │ v1.37.0 │ 11 Jan 26 07:56 UTC │ 11 Jan 26 07:56 UTC │
	│ ssh     │ -p bridge-181803 sudo cat /lib/systemd/system/containerd.service                                                                                                         │ bridge-181803                │ jenkins │ v1.37.0 │ 11 Jan 26 07:56 UTC │ 11 Jan 26 07:56 UTC │
	│ stop    │ -p old-k8s-version-086314 --alsologtostderr -v=3                                                                                                                         │ old-k8s-version-086314       │ jenkins │ v1.37.0 │ 11 Jan 26 07:56 UTC │                     │
	│ ssh     │ -p bridge-181803 sudo cat /etc/containerd/config.toml                                                                                                                    │ bridge-181803                │ jenkins │ v1.37.0 │ 11 Jan 26 07:56 UTC │ 11 Jan 26 07:56 UTC │
	│ ssh     │ -p bridge-181803 sudo containerd config dump                                                                                                                             │ bridge-181803                │ jenkins │ v1.37.0 │ 11 Jan 26 07:56 UTC │ 11 Jan 26 07:56 UTC │
	│ ssh     │ -p bridge-181803 sudo systemctl status crio --all --full --no-pager                                                                                                      │ bridge-181803                │ jenkins │ v1.37.0 │ 11 Jan 26 07:56 UTC │ 11 Jan 26 07:56 UTC │
	│ ssh     │ -p bridge-181803 sudo systemctl cat crio --no-pager                                                                                                                      │ bridge-181803                │ jenkins │ v1.37.0 │ 11 Jan 26 07:56 UTC │ 11 Jan 26 07:56 UTC │
	│ ssh     │ -p bridge-181803 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                            │ bridge-181803                │ jenkins │ v1.37.0 │ 11 Jan 26 07:56 UTC │ 11 Jan 26 07:56 UTC │
	│ ssh     │ -p bridge-181803 sudo crio config                                                                                                                                        │ bridge-181803                │ jenkins │ v1.37.0 │ 11 Jan 26 07:56 UTC │ 11 Jan 26 07:56 UTC │
	│ delete  │ -p bridge-181803                                                                                                                                                         │ bridge-181803                │ jenkins │ v1.37.0 │ 11 Jan 26 07:56 UTC │ 11 Jan 26 07:56 UTC │
	│ delete  │ -p disable-driver-mounts-768607                                                                                                                                          │ disable-driver-mounts-768607 │ jenkins │ v1.37.0 │ 11 Jan 26 07:56 UTC │ 11 Jan 26 07:56 UTC │
	│ start   │ -p default-k8s-diff-port-585688 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ default-k8s-diff-port-585688 │ jenkins │ v1.37.0 │ 11 Jan 26 07:56 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-875594 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                  │ no-preload-875594            │ jenkins │ v1.37.0 │ 11 Jan 26 07:56 UTC │                     │
	│ stop    │ -p no-preload-875594 --alsologtostderr -v=3                                                                                                                              │ no-preload-875594            │ jenkins │ v1.37.0 │ 11 Jan 26 07:56 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-285966 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                 │ embed-certs-285966           │ jenkins │ v1.37.0 │ 11 Jan 26 07:57 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2026/01/11 07:56:55
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0111 07:56:55.556161  306327 out.go:360] Setting OutFile to fd 1 ...
	I0111 07:56:55.556293  306327 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 07:56:55.556304  306327 out.go:374] Setting ErrFile to fd 2...
	I0111 07:56:55.556310  306327 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 07:56:55.556520  306327 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22402-4689/.minikube/bin
	I0111 07:56:55.557017  306327 out.go:368] Setting JSON to false
	I0111 07:56:55.558161  306327 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2364,"bootTime":1768115852,"procs":298,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0111 07:56:55.558214  306327 start.go:143] virtualization: kvm guest
	I0111 07:56:55.560314  306327 out.go:179] * [default-k8s-diff-port-585688] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0111 07:56:55.561557  306327 out.go:179]   - MINIKUBE_LOCATION=22402
	I0111 07:56:55.561547  306327 notify.go:221] Checking for updates...
	I0111 07:56:55.564071  306327 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0111 07:56:55.565238  306327 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22402-4689/kubeconfig
	I0111 07:56:55.566238  306327 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22402-4689/.minikube
	I0111 07:56:55.567307  306327 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0111 07:56:55.568659  306327 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0111 07:56:55.570240  306327 config.go:182] Loaded profile config "embed-certs-285966": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 07:56:55.570360  306327 config.go:182] Loaded profile config "no-preload-875594": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 07:56:55.570450  306327 config.go:182] Loaded profile config "old-k8s-version-086314": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I0111 07:56:55.570530  306327 driver.go:422] Setting default libvirt URI to qemu:///system
	I0111 07:56:55.594387  306327 docker.go:124] docker version: linux-29.1.4:Docker Engine - Community
	I0111 07:56:55.594535  306327 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0111 07:56:55.648226  306327 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:70 OomKillDisable:false NGoroutines:82 SystemTime:2026-01-11 07:56:55.638385346 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0111 07:56:55.648333  306327 docker.go:319] overlay module found
	I0111 07:56:55.650050  306327 out.go:179] * Using the docker driver based on user configuration
	I0111 07:56:55.651233  306327 start.go:309] selected driver: docker
	I0111 07:56:55.651252  306327 start.go:928] validating driver "docker" against <nil>
	I0111 07:56:55.651263  306327 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0111 07:56:55.651786  306327 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0111 07:56:55.706677  306327 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:70 OomKillDisable:false NGoroutines:82 SystemTime:2026-01-11 07:56:55.695165644 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0111 07:56:55.706837  306327 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I0111 07:56:55.707067  306327 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0111 07:56:55.708882  306327 out.go:179] * Using Docker driver with root privileges
	I0111 07:56:55.710098  306327 cni.go:84] Creating CNI manager for ""
	I0111 07:56:55.710166  306327 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0111 07:56:55.710177  306327 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I0111 07:56:55.710259  306327 start.go:353] cluster config:
	{Name:default-k8s-diff-port-585688 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-585688 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SS
HAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0111 07:56:55.711540  306327 out.go:179] * Starting "default-k8s-diff-port-585688" primary control-plane node in "default-k8s-diff-port-585688" cluster
	I0111 07:56:55.712644  306327 cache.go:134] Beginning downloading kic base image for docker with crio
	I0111 07:56:55.713878  306327 out.go:179] * Pulling base image v0.0.48-1768032998-22402 ...
	I0111 07:56:55.715052  306327 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0111 07:56:55.715084  306327 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22402-4689/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4
	I0111 07:56:55.715091  306327 cache.go:65] Caching tarball of preloaded images
	I0111 07:56:55.715155  306327 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 in local docker daemon
	I0111 07:56:55.715181  306327 preload.go:251] Found /home/jenkins/minikube-integration/22402-4689/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0111 07:56:55.715193  306327 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I0111 07:56:55.715369  306327 profile.go:143] Saving config to /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/default-k8s-diff-port-585688/config.json ...
	I0111 07:56:55.715415  306327 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/default-k8s-diff-port-585688/config.json: {Name:mkbc339e7cdf379eff2068dfb6d816e133305c14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 07:56:55.736316  306327 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 in local docker daemon, skipping pull
	I0111 07:56:55.736342  306327 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 exists in daemon, skipping load
	I0111 07:56:55.736357  306327 cache.go:243] Successfully downloaded all kic artifacts
	I0111 07:56:55.736397  306327 start.go:360] acquireMachinesLock for default-k8s-diff-port-585688: {Name:mk2e6a9de762d1bbe8860b89830bc1ce9534f022 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0111 07:56:55.736498  306327 start.go:364] duration metric: took 79.092µs to acquireMachinesLock for "default-k8s-diff-port-585688"
	I0111 07:56:55.736535  306327 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-585688 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-585688 Namespace:default API
ServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:
false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0111 07:56:55.736632  306327 start.go:125] createHost starting for "" (driver="docker")
	I0111 07:56:55.738579  306327 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0111 07:56:55.738821  306327 start.go:159] libmachine.API.Create for "default-k8s-diff-port-585688" (driver="docker")
	I0111 07:56:55.738855  306327 client.go:173] LocalClient.Create starting
	I0111 07:56:55.738957  306327 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22402-4689/.minikube/certs/ca.pem
	I0111 07:56:55.739006  306327 main.go:144] libmachine: Decoding PEM data...
	I0111 07:56:55.739032  306327 main.go:144] libmachine: Parsing certificate...
	I0111 07:56:55.739099  306327 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22402-4689/.minikube/certs/cert.pem
	I0111 07:56:55.739134  306327 main.go:144] libmachine: Decoding PEM data...
	I0111 07:56:55.739159  306327 main.go:144] libmachine: Parsing certificate...
	I0111 07:56:55.739487  306327 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-585688 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0111 07:56:55.756287  306327 cli_runner.go:211] docker network inspect default-k8s-diff-port-585688 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0111 07:56:55.756347  306327 network_create.go:284] running [docker network inspect default-k8s-diff-port-585688] to gather additional debugging logs...
	I0111 07:56:55.756362  306327 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-585688
	W0111 07:56:55.772529  306327 cli_runner.go:211] docker network inspect default-k8s-diff-port-585688 returned with exit code 1
	I0111 07:56:55.772558  306327 network_create.go:287] error running [docker network inspect default-k8s-diff-port-585688]: docker network inspect default-k8s-diff-port-585688: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-585688 not found
	I0111 07:56:55.772571  306327 network_create.go:289] output of [docker network inspect default-k8s-diff-port-585688]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-585688 not found
	
	** /stderr **
	I0111 07:56:55.772697  306327 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0111 07:56:55.790342  306327 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-97182e010def IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:7c:25:92:4b:a6} reservation:<nil>}
	I0111 07:56:55.791043  306327 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-2c82d96e68c9 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:72:e5:3c:51:ff:2e} reservation:<nil>}
	I0111 07:56:55.791735  306327 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-a487bdecf87a IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:76:a6:1b:9d:11:3d} reservation:<nil>}
	I0111 07:56:55.792313  306327 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-e8428f34acce IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:ee:dd:0c:57:0e:02} reservation:<nil>}
	I0111 07:56:55.793016  306327 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-381973c00b1f IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:36:dd:86:21:bf:32} reservation:<nil>}
	I0111 07:56:55.793830  306327 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00205c4c0}
	I0111 07:56:55.793856  306327 network_create.go:124] attempt to create docker network default-k8s-diff-port-585688 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I0111 07:56:55.793902  306327 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-585688 default-k8s-diff-port-585688
	I0111 07:56:55.845050  306327 network_create.go:108] docker network default-k8s-diff-port-585688 192.168.94.0/24 created
	I0111 07:56:55.845092  306327 kic.go:121] calculated static IP "192.168.94.2" for the "default-k8s-diff-port-585688" container
	I0111 07:56:55.845153  306327 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0111 07:56:55.863355  306327 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-585688 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-585688 --label created_by.minikube.sigs.k8s.io=true
	I0111 07:56:55.883077  306327 oci.go:103] Successfully created a docker volume default-k8s-diff-port-585688
	I0111 07:56:55.883166  306327 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-585688-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-585688 --entrypoint /usr/bin/test -v default-k8s-diff-port-585688:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 -d /var/lib
	I0111 07:56:56.306794  306327 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-585688
	I0111 07:56:56.306900  306327 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0111 07:56:56.306913  306327 kic.go:194] Starting extracting preloaded images to volume ...
	I0111 07:56:56.306984  306327 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22402-4689/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-585688:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 -I lz4 -xf /preloaded.tar -C /extractDir
	I0111 07:57:00.369057  306327 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22402-4689/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-585688:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 -I lz4 -xf /preloaded.tar -C /extractDir: (4.062012616s)
	I0111 07:57:00.369099  306327 kic.go:203] duration metric: took 4.062182289s to extract preloaded images to volume ...
	W0111 07:57:00.369191  306327 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0111 07:57:00.369230  306327 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0111 07:57:00.369278  306327 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0111 07:57:00.430884  306327 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-585688 --name default-k8s-diff-port-585688 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-585688 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-585688 --network default-k8s-diff-port-585688 --ip 192.168.94.2 --volume default-k8s-diff-port-585688:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615
	
	
	==> CRI-O <==
	Jan 11 07:56:48 embed-certs-285966 crio[779]: time="2026-01-11T07:56:48.77891253Z" level=info msg="Starting container: fbfb5b410da2f17e44a48edf081a593d7212e7240b01a58e0a02e61fb3356bc2" id=451bfa29-81e2-4b15-a7a8-23b909dee1bc name=/runtime.v1.RuntimeService/StartContainer
	Jan 11 07:56:48 embed-certs-285966 crio[779]: time="2026-01-11T07:56:48.783006453Z" level=info msg="Started container" PID=1904 containerID=fbfb5b410da2f17e44a48edf081a593d7212e7240b01a58e0a02e61fb3356bc2 description=kube-system/coredns-7d764666f9-2ffx6/coredns id=451bfa29-81e2-4b15-a7a8-23b909dee1bc name=/runtime.v1.RuntimeService/StartContainer sandboxID=4038dbd66a385e6c17caf5c5f8030aaaa53712fd0242635380247a2f2d80536f
	Jan 11 07:56:52 embed-certs-285966 crio[779]: time="2026-01-11T07:56:52.422373853Z" level=info msg="Running pod sandbox: default/busybox/POD" id=48765a25-9430-4552-ab43-da21643068d2 name=/runtime.v1.RuntimeService/RunPodSandbox
	Jan 11 07:56:52 embed-certs-285966 crio[779]: time="2026-01-11T07:56:52.422444959Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 11 07:56:52 embed-certs-285966 crio[779]: time="2026-01-11T07:56:52.427723384Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:1c2a43c2fc50721457f29acae0bf31604ae959752045932e9ccd8a1a056e0369 UID:b919cf59-33aa-47fd-a15a-51037a983c6e NetNS:/var/run/netns/7884aab6-15cf-4a0b-a299-1d7726df9d59 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00007ecf0}] Aliases:map[]}"
	Jan 11 07:56:52 embed-certs-285966 crio[779]: time="2026-01-11T07:56:52.427759255Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Jan 11 07:56:52 embed-certs-285966 crio[779]: time="2026-01-11T07:56:52.446376384Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:1c2a43c2fc50721457f29acae0bf31604ae959752045932e9ccd8a1a056e0369 UID:b919cf59-33aa-47fd-a15a-51037a983c6e NetNS:/var/run/netns/7884aab6-15cf-4a0b-a299-1d7726df9d59 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00007ecf0}] Aliases:map[]}"
	Jan 11 07:56:52 embed-certs-285966 crio[779]: time="2026-01-11T07:56:52.446515709Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Jan 11 07:56:52 embed-certs-285966 crio[779]: time="2026-01-11T07:56:52.447399121Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Jan 11 07:56:52 embed-certs-285966 crio[779]: time="2026-01-11T07:56:52.448266906Z" level=info msg="Ran pod sandbox 1c2a43c2fc50721457f29acae0bf31604ae959752045932e9ccd8a1a056e0369 with infra container: default/busybox/POD" id=48765a25-9430-4552-ab43-da21643068d2 name=/runtime.v1.RuntimeService/RunPodSandbox
	Jan 11 07:56:52 embed-certs-285966 crio[779]: time="2026-01-11T07:56:52.449632896Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=16b885dd-7f1c-4788-8e4c-b46e0c8b1389 name=/runtime.v1.ImageService/ImageStatus
	Jan 11 07:56:52 embed-certs-285966 crio[779]: time="2026-01-11T07:56:52.449762468Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=16b885dd-7f1c-4788-8e4c-b46e0c8b1389 name=/runtime.v1.ImageService/ImageStatus
	Jan 11 07:56:52 embed-certs-285966 crio[779]: time="2026-01-11T07:56:52.449875511Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=16b885dd-7f1c-4788-8e4c-b46e0c8b1389 name=/runtime.v1.ImageService/ImageStatus
	Jan 11 07:56:52 embed-certs-285966 crio[779]: time="2026-01-11T07:56:52.450737578Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=4763dd55-23de-4a82-88fc-794919eca643 name=/runtime.v1.ImageService/PullImage
	Jan 11 07:56:52 embed-certs-285966 crio[779]: time="2026-01-11T07:56:52.451095474Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Jan 11 07:56:53 embed-certs-285966 crio[779]: time="2026-01-11T07:56:53.688515533Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=4763dd55-23de-4a82-88fc-794919eca643 name=/runtime.v1.ImageService/PullImage
	Jan 11 07:56:53 embed-certs-285966 crio[779]: time="2026-01-11T07:56:53.689161418Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=bfc1cd7c-525d-487c-9055-865b46555400 name=/runtime.v1.ImageService/ImageStatus
	Jan 11 07:56:53 embed-certs-285966 crio[779]: time="2026-01-11T07:56:53.690793803Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=3450d94b-23dd-4b67-bf5a-1d45490e4db1 name=/runtime.v1.ImageService/ImageStatus
	Jan 11 07:56:53 embed-certs-285966 crio[779]: time="2026-01-11T07:56:53.693980684Z" level=info msg="Creating container: default/busybox/busybox" id=1ee27c01-71cd-496b-b5ec-cc92962d9086 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 11 07:56:53 embed-certs-285966 crio[779]: time="2026-01-11T07:56:53.69412681Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 11 07:56:53 embed-certs-285966 crio[779]: time="2026-01-11T07:56:53.697586306Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 11 07:56:53 embed-certs-285966 crio[779]: time="2026-01-11T07:56:53.698030598Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 11 07:56:53 embed-certs-285966 crio[779]: time="2026-01-11T07:56:53.723669217Z" level=info msg="Created container 5e30b0ebb42bf8640044c402f7e4aaf8365fc311a5027ea34a85e5bfac89ae7a: default/busybox/busybox" id=1ee27c01-71cd-496b-b5ec-cc92962d9086 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 11 07:56:53 embed-certs-285966 crio[779]: time="2026-01-11T07:56:53.724313573Z" level=info msg="Starting container: 5e30b0ebb42bf8640044c402f7e4aaf8365fc311a5027ea34a85e5bfac89ae7a" id=6587f678-1afd-4ed4-850b-592c91b95ebc name=/runtime.v1.RuntimeService/StartContainer
	Jan 11 07:56:53 embed-certs-285966 crio[779]: time="2026-01-11T07:56:53.726586229Z" level=info msg="Started container" PID=1979 containerID=5e30b0ebb42bf8640044c402f7e4aaf8365fc311a5027ea34a85e5bfac89ae7a description=default/busybox/busybox id=6587f678-1afd-4ed4-850b-592c91b95ebc name=/runtime.v1.RuntimeService/StartContainer sandboxID=1c2a43c2fc50721457f29acae0bf31604ae959752045932e9ccd8a1a056e0369
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	5e30b0ebb42bf       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   8 seconds ago       Running             busybox                   0                   1c2a43c2fc507       busybox                                      default
	fbfb5b410da2f       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                      13 seconds ago      Running             coredns                   0                   4038dbd66a385       coredns-7d764666f9-2ffx6                     kube-system
	78d196f12d7c7       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 seconds ago      Running             storage-provisioner       0                   76ba990a4b41f       storage-provisioner                          kube-system
	5333ebf8a25e5       docker.io/kindest/kindnetd@sha256:7c22558dc06a570d46ea6e8a73b23cdc754eb81f7c08d3441a3171ad359ffc27    24 seconds ago      Running             kindnet-cni               0                   fda86a6fe4fb8       kindnet-jqj7z                                kube-system
	9275d68b119d1       32652ff1bbe6b6df16a3dc621fcad4e6e185a2852bffc5847aab79e36c54bfb8                                      25 seconds ago      Running             kube-proxy                0                   794411777d1ba       kube-proxy-2slrz                             kube-system
	af0e1f407a264       2c9a4b058bd7e6ec479c38f9e8a1dac2f5ee5b0a3ebda6dfac968e8720229508                                      35 seconds ago      Running             kube-controller-manager   0                   6e3bd27d8995a       kube-controller-manager-embed-certs-285966   kube-system
	10bbe1fa3f128       5c6acd67e9cd10eb60a246cd233db251ef62ea97e6572f897e873f0cb648f499                                      35 seconds ago      Running             kube-apiserver            0                   af8d539cb68a3       kube-apiserver-embed-certs-285966            kube-system
	dba4b5a047b08       550794e3b12ac21ec7fd940bdfb45f7c8dae3c52acd8c70e580b882de20c3dcc                                      35 seconds ago      Running             kube-scheduler            0                   f3f6eeeb52da7       kube-scheduler-embed-certs-285966            kube-system
	cf3e22052dcdc       0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2                                      35 seconds ago      Running             etcd                      0                   b4b02938c7bfc       etcd-embed-certs-285966                      kube-system
	
	
	==> coredns [fbfb5b410da2f17e44a48edf081a593d7212e7240b01a58e0a02e61fb3356bc2] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:41103 - 31588 "HINFO IN 4694964725093344753.7031101248620318544. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.016619243s
	
	
	==> describe nodes <==
	Name:               embed-certs-285966
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-285966
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4473aa4ffaa416872fe849e19c0ce3dabca02c04
	                    minikube.k8s.io/name=embed-certs-285966
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2026_01_11T07_56_31_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 11 Jan 2026 07:56:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-285966
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 11 Jan 2026 07:57:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 11 Jan 2026 07:57:01 +0000   Sun, 11 Jan 2026 07:56:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 11 Jan 2026 07:57:01 +0000   Sun, 11 Jan 2026 07:56:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 11 Jan 2026 07:57:01 +0000   Sun, 11 Jan 2026 07:56:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 11 Jan 2026 07:57:01 +0000   Sun, 11 Jan 2026 07:56:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    embed-certs-285966
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 19a608e996fda0d6a8d0aefd69620b69
	  System UUID:                0f057153-04bf-4365-8e21-27be17dcfc7b
	  Boot ID:                    bf5ef4fd-1e5e-4242-b522-48b308ed11f1
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.35.0
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-7d764666f9-2ffx6                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     26s
	  kube-system                 etcd-embed-certs-285966                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         32s
	  kube-system                 kindnet-jqj7z                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      26s
	  kube-system                 kube-apiserver-embed-certs-285966             250m (3%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-controller-manager-embed-certs-285966    200m (2%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-proxy-2slrz                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                 kube-scheduler-embed-certs-285966             100m (1%)     0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  28s   node-controller  Node embed-certs-285966 event: Registered Node embed-certs-285966 in Controller
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff d2 27 60 f0 e8 33 08 06
	[  +7.744805] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ae ab 39 92 a9 51 08 06
	[  +3.804668] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff f6 cc 68 e4 77 08 08 06
	[  +0.000399] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff d2 27 60 f0 e8 33 08 06
	[ +13.685163] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff ce 63 58 3f ae 06 08 06
	[  +0.000359] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ae ab 39 92 a9 51 08 06
	[ +15.294069] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 62 fd 79 5e 2e 8a 08 06
	[  +0.000335] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 92 a7 42 30 c5 33 08 06
	[  +0.001094] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 76 49 4e 78 31 d8 08 06
	[Jan11 07:56] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 1e 72 0a 69 e2 8c 08 06
	[  +0.129588] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 46 31 8d 74 d3 50 08 06
	[ +23.675508] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 72 b4 a7 bf 59 ba 08 06
	[  +0.000409] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 46 31 8d 74 d3 50 08 06
	
	
	==> etcd [cf3e22052dcdc6135c4f3fa844a6042869b0f512b95d26598fe9b15cfd350724] <==
	{"level":"info","ts":"2026-01-11T07:56:26.668062Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"9f0758e1c58a86ed became pre-candidate at term 1"}
	{"level":"info","ts":"2026-01-11T07:56:26.668191Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 1"}
	{"level":"info","ts":"2026-01-11T07:56:26.668326Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"9f0758e1c58a86ed has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2026-01-11T07:56:26.668374Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"9f0758e1c58a86ed became candidate at term 2"}
	{"level":"info","ts":"2026-01-11T07:56:26.668893Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2026-01-11T07:56:26.668967Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"9f0758e1c58a86ed has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2026-01-11T07:56:26.669004Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"9f0758e1c58a86ed became leader at term 2"}
	{"level":"info","ts":"2026-01-11T07:56:26.669032Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2026-01-11T07:56:26.669597Z","caller":"etcdserver/server.go:2420","msg":"setting up initial cluster version using v3 API","cluster-version":"3.6"}
	{"level":"info","ts":"2026-01-11T07:56:26.670133Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:embed-certs-285966 ClientURLs:[https://192.168.85.2:2379]}","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2026-01-11T07:56:26.670214Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-11T07:56:26.670245Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-11T07:56:26.670597Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2026-01-11T07:56:26.671026Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2026-01-11T07:56:26.670608Z","caller":"membership/cluster.go:682","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.6"}
	{"level":"info","ts":"2026-01-11T07:56:26.671166Z","caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.6"}
	{"level":"info","ts":"2026-01-11T07:56:26.671205Z","caller":"etcdserver/server.go:2440","msg":"cluster version is updated","cluster-version":"3.6"}
	{"level":"info","ts":"2026-01-11T07:56:26.671243Z","caller":"version/monitor.go:116","msg":"cluster version differs from storage version.","cluster-version":"3.6.0","storage-version":"3.5.0"}
	{"level":"info","ts":"2026-01-11T07:56:26.671373Z","caller":"schema/migration.go:65","msg":"updated storage version","new-storage-version":"3.6.0"}
	{"level":"info","ts":"2026-01-11T07:56:26.672118Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-11T07:56:26.672585Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-11T07:56:26.675243Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2026-01-11T07:56:26.675607Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2026-01-11T07:56:59.026433Z","caller":"traceutil/trace.go:172","msg":"trace[934568820] transaction","detail":"{read_only:false; response_revision:480; number_of_response:1; }","duration":"143.357133ms","start":"2026-01-11T07:56:58.883060Z","end":"2026-01-11T07:56:59.026417Z","steps":["trace[934568820] 'process raft request'  (duration: 143.232875ms)"],"step_count":1}
	{"level":"info","ts":"2026-01-11T07:56:59.811313Z","caller":"traceutil/trace.go:172","msg":"trace[1428535888] transaction","detail":"{read_only:false; response_revision:481; number_of_response:1; }","duration":"128.754421ms","start":"2026-01-11T07:56:59.682539Z","end":"2026-01-11T07:56:59.811293Z","steps":["trace[1428535888] 'process raft request'  (duration: 123.958133ms)"],"step_count":1}
	
	
	==> kernel <==
	 07:57:02 up 39 min,  0 user,  load average: 3.94, 3.45, 2.42
	Linux embed-certs-285966 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [5333ebf8a25e55acf5840b3b86e66c9b0dbd6848a20427363d49576bbb67cfd8] <==
	I0111 07:56:37.873093       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0111 07:56:37.873329       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I0111 07:56:37.873463       1 main.go:148] setting mtu 1500 for CNI 
	I0111 07:56:37.873483       1 main.go:178] kindnetd IP family: "ipv4"
	I0111 07:56:37.873500       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2026-01-11T07:56:38Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0111 07:56:38.079341       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0111 07:56:38.079384       1 controller.go:381] "Waiting for informer caches to sync"
	I0111 07:56:38.079399       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0111 07:56:38.079528       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I0111 07:56:38.480558       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0111 07:56:38.480589       1 metrics.go:72] Registering metrics
	I0111 07:56:38.480923       1 controller.go:711] "Syncing nftables rules"
	I0111 07:56:48.079994       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0111 07:56:48.080084       1 main.go:301] handling current node
	I0111 07:56:58.081872       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0111 07:56:58.081928       1 main.go:301] handling current node
	
	
	==> kube-apiserver [10bbe1fa3f128c382cc0f79642b18e854e690366df5bad10cd901c81a281348c] <==
	I0111 07:56:28.173660       1 cache.go:39] Caches are synced for autoregister controller
	I0111 07:56:28.175278       1 controller.go:667] quota admission added evaluator for: namespaces
	I0111 07:56:28.176301       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I0111 07:56:28.177208       1 default_servicecidr_controller.go:231] Setting default ServiceCIDR condition Ready to True
	I0111 07:56:28.180014       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0111 07:56:28.182759       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0111 07:56:28.212746       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I0111 07:56:29.077259       1 storage_scheduling.go:123] created PriorityClass system-node-critical with value 2000001000
	I0111 07:56:29.081043       1 storage_scheduling.go:123] created PriorityClass system-cluster-critical with value 2000000000
	I0111 07:56:29.081061       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I0111 07:56:29.543616       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0111 07:56:29.581691       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0111 07:56:29.680524       1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0111 07:56:29.686244       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I0111 07:56:29.687289       1 controller.go:667] quota admission added evaluator for: endpoints
	I0111 07:56:29.692175       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0111 07:56:30.112997       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I0111 07:56:30.818991       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I0111 07:56:30.830424       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0111 07:56:30.838403       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I0111 07:56:35.869636       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0111 07:56:35.877639       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0111 07:56:36.065243       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0111 07:56:36.116160       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E0111 07:57:01.203608       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8443->192.168.85.1:36976: use of closed network connection
	
	
	==> kube-controller-manager [af0e1f407a264ec717e9ce286a65b5dbfc6b731c7a550abfa36ae8867af3081e] <==
	I0111 07:56:34.927019       1 shared_informer.go:377] "Caches are synced"
	I0111 07:56:34.927009       1 shared_informer.go:377] "Caches are synced"
	I0111 07:56:34.927124       1 shared_informer.go:377] "Caches are synced"
	I0111 07:56:34.927397       1 shared_informer.go:377] "Caches are synced"
	I0111 07:56:34.927133       1 shared_informer.go:377] "Caches are synced"
	I0111 07:56:34.926974       1 shared_informer.go:377] "Caches are synced"
	I0111 07:56:34.927137       1 shared_informer.go:377] "Caches are synced"
	I0111 07:56:34.927127       1 shared_informer.go:377] "Caches are synced"
	I0111 07:56:34.927146       1 shared_informer.go:377] "Caches are synced"
	I0111 07:56:34.927832       1 range_allocator.go:177] "Sending events to api server"
	I0111 07:56:34.927153       1 shared_informer.go:377] "Caches are synced"
	I0111 07:56:34.929319       1 range_allocator.go:181] "Starting range CIDR allocator"
	I0111 07:56:34.929343       1 shared_informer.go:370] "Waiting for caches to sync"
	I0111 07:56:34.929349       1 shared_informer.go:377] "Caches are synced"
	I0111 07:56:34.927124       1 shared_informer.go:377] "Caches are synced"
	I0111 07:56:34.927147       1 shared_informer.go:377] "Caches are synced"
	I0111 07:56:34.932485       1 shared_informer.go:377] "Caches are synced"
	I0111 07:56:34.935348       1 shared_informer.go:377] "Caches are synced"
	I0111 07:56:34.935529       1 shared_informer.go:377] "Caches are synced"
	I0111 07:56:34.939977       1 range_allocator.go:433] "Set node PodCIDR" node="embed-certs-285966" podCIDRs=["10.244.0.0/24"]
	I0111 07:56:35.021730       1 shared_informer.go:377] "Caches are synced"
	I0111 07:56:35.028870       1 shared_informer.go:377] "Caches are synced"
	I0111 07:56:35.028891       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I0111 07:56:35.028898       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I0111 07:56:49.921551       1 node_lifecycle_controller.go:1057] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-proxy [9275d68b119d1c2348c619df618f72c17db3b0199d892c75360204c591b8c6da] <==
	I0111 07:56:36.540618       1 server_linux.go:53] "Using iptables proxy"
	I0111 07:56:36.609660       1 shared_informer.go:370] "Waiting for caches to sync"
	I0111 07:56:36.709862       1 shared_informer.go:377] "Caches are synced"
	I0111 07:56:36.709917       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E0111 07:56:36.710027       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0111 07:56:36.734224       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0111 07:56:36.734291       1 server_linux.go:136] "Using iptables Proxier"
	I0111 07:56:36.740278       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0111 07:56:36.740694       1 server.go:529] "Version info" version="v1.35.0"
	I0111 07:56:36.740722       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0111 07:56:36.742153       1 config.go:403] "Starting serviceCIDR config controller"
	I0111 07:56:36.742546       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0111 07:56:36.742203       1 config.go:200] "Starting service config controller"
	I0111 07:56:36.742574       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0111 07:56:36.742453       1 config.go:309] "Starting node config controller"
	I0111 07:56:36.742234       1 config.go:106] "Starting endpoint slice config controller"
	I0111 07:56:36.742587       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0111 07:56:36.742595       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0111 07:56:36.742616       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0111 07:56:36.842652       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0111 07:56:36.842660       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0111 07:56:36.842795       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [dba4b5a047b08ebf52a6c392efe34642a8555075b1544e1bfcdea74446f46ec3] <==
	E0111 07:56:28.138723       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E0111 07:56:28.140285       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E0111 07:56:28.140491       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E0111 07:56:28.140541       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E0111 07:56:28.140576       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E0111 07:56:28.140701       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E0111 07:56:28.140729       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E0111 07:56:28.140735       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E0111 07:56:28.140856       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E0111 07:56:28.140873       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E0111 07:56:28.140872       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E0111 07:56:28.141024       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E0111 07:56:28.977288       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E0111 07:56:28.986249       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E0111 07:56:29.006493       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1693" type="*v1.ConfigMap"
	E0111 07:56:29.070136       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E0111 07:56:29.190120       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E0111 07:56:29.209020       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E0111 07:56:29.235306       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E0111 07:56:29.302642       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E0111 07:56:29.302642       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E0111 07:56:29.307901       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E0111 07:56:29.353380       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E0111 07:56:29.358361       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	I0111 07:56:31.131856       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Jan 11 07:56:36 embed-certs-285966 kubelet[1317]: I0111 07:56:36.178391    1317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/63080ebb-cfaf-431d-a8db-d5c25bf7e852-lib-modules\") pod \"kube-proxy-2slrz\" (UID: \"63080ebb-cfaf-431d-a8db-d5c25bf7e852\") " pod="kube-system/kube-proxy-2slrz"
	Jan 11 07:56:36 embed-certs-285966 kubelet[1317]: I0111 07:56:36.178421    1317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8l4m\" (UniqueName: \"kubernetes.io/projected/63080ebb-cfaf-431d-a8db-d5c25bf7e852-kube-api-access-k8l4m\") pod \"kube-proxy-2slrz\" (UID: \"63080ebb-cfaf-431d-a8db-d5c25bf7e852\") " pod="kube-system/kube-proxy-2slrz"
	Jan 11 07:56:36 embed-certs-285966 kubelet[1317]: I0111 07:56:36.178448    1317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/501d0d18-f10c-475b-a6e7-fe2c701a98f4-cni-cfg\") pod \"kindnet-jqj7z\" (UID: \"501d0d18-f10c-475b-a6e7-fe2c701a98f4\") " pod="kube-system/kindnet-jqj7z"
	Jan 11 07:56:36 embed-certs-285966 kubelet[1317]: I0111 07:56:36.178467    1317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/501d0d18-f10c-475b-a6e7-fe2c701a98f4-lib-modules\") pod \"kindnet-jqj7z\" (UID: \"501d0d18-f10c-475b-a6e7-fe2c701a98f4\") " pod="kube-system/kindnet-jqj7z"
	Jan 11 07:56:36 embed-certs-285966 kubelet[1317]: I0111 07:56:36.178489    1317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/501d0d18-f10c-475b-a6e7-fe2c701a98f4-xtables-lock\") pod \"kindnet-jqj7z\" (UID: \"501d0d18-f10c-475b-a6e7-fe2c701a98f4\") " pod="kube-system/kindnet-jqj7z"
	Jan 11 07:56:36 embed-certs-285966 kubelet[1317]: I0111 07:56:36.178515    1317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/63080ebb-cfaf-431d-a8db-d5c25bf7e852-xtables-lock\") pod \"kube-proxy-2slrz\" (UID: \"63080ebb-cfaf-431d-a8db-d5c25bf7e852\") " pod="kube-system/kube-proxy-2slrz"
	Jan 11 07:56:36 embed-certs-285966 kubelet[1317]: E0111 07:56:36.395106    1317 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-embed-certs-285966" containerName="kube-controller-manager"
	Jan 11 07:56:36 embed-certs-285966 kubelet[1317]: I0111 07:56:36.690117    1317 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-2slrz" podStartSLOduration=0.690098548 podStartE2EDuration="690.098548ms" podCreationTimestamp="2026-01-11 07:56:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-11 07:56:36.68985389 +0000 UTC m=+6.123925590" watchObservedRunningTime="2026-01-11 07:56:36.690098548 +0000 UTC m=+6.124170248"
	Jan 11 07:56:41 embed-certs-285966 kubelet[1317]: E0111 07:56:41.742589    1317 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-embed-certs-285966" containerName="etcd"
	Jan 11 07:56:41 embed-certs-285966 kubelet[1317]: I0111 07:56:41.753317    1317 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kindnet-jqj7z" podStartSLOduration=4.570840415 podStartE2EDuration="5.753298191s" podCreationTimestamp="2026-01-11 07:56:36 +0000 UTC" firstStartedPulling="2026-01-11 07:56:36.45974805 +0000 UTC m=+5.893819732" lastFinishedPulling="2026-01-11 07:56:37.642205816 +0000 UTC m=+7.076277508" observedRunningTime="2026-01-11 07:56:38.695568653 +0000 UTC m=+8.129640365" watchObservedRunningTime="2026-01-11 07:56:41.753298191 +0000 UTC m=+11.187369891"
	Jan 11 07:56:41 embed-certs-285966 kubelet[1317]: E0111 07:56:41.802587    1317 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-embed-certs-285966" containerName="kube-apiserver"
	Jan 11 07:56:42 embed-certs-285966 kubelet[1317]: E0111 07:56:42.693648    1317 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-embed-certs-285966" containerName="kube-apiserver"
	Jan 11 07:56:45 embed-certs-285966 kubelet[1317]: E0111 07:56:45.594558    1317 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-embed-certs-285966" containerName="kube-scheduler"
	Jan 11 07:56:46 embed-certs-285966 kubelet[1317]: E0111 07:56:46.400349    1317 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-embed-certs-285966" containerName="kube-controller-manager"
	Jan 11 07:56:48 embed-certs-285966 kubelet[1317]: I0111 07:56:48.375671    1317 kubelet_node_status.go:427] "Fast updating node status as it just became ready"
	Jan 11 07:56:48 embed-certs-285966 kubelet[1317]: I0111 07:56:48.479519    1317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2688eeb7-8bc5-4f15-b509-5cbef9221315-config-volume\") pod \"coredns-7d764666f9-2ffx6\" (UID: \"2688eeb7-8bc5-4f15-b509-5cbef9221315\") " pod="kube-system/coredns-7d764666f9-2ffx6"
	Jan 11 07:56:48 embed-certs-285966 kubelet[1317]: I0111 07:56:48.479560    1317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sk9f2\" (UniqueName: \"kubernetes.io/projected/2688eeb7-8bc5-4f15-b509-5cbef9221315-kube-api-access-sk9f2\") pod \"coredns-7d764666f9-2ffx6\" (UID: \"2688eeb7-8bc5-4f15-b509-5cbef9221315\") " pod="kube-system/coredns-7d764666f9-2ffx6"
	Jan 11 07:56:48 embed-certs-285966 kubelet[1317]: I0111 07:56:48.479590    1317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/8203c244-cf17-45c3-829c-b15dd46689b6-tmp\") pod \"storage-provisioner\" (UID: \"8203c244-cf17-45c3-829c-b15dd46689b6\") " pod="kube-system/storage-provisioner"
	Jan 11 07:56:48 embed-certs-285966 kubelet[1317]: I0111 07:56:48.479692    1317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prclb\" (UniqueName: \"kubernetes.io/projected/8203c244-cf17-45c3-829c-b15dd46689b6-kube-api-access-prclb\") pod \"storage-provisioner\" (UID: \"8203c244-cf17-45c3-829c-b15dd46689b6\") " pod="kube-system/storage-provisioner"
	Jan 11 07:56:49 embed-certs-285966 kubelet[1317]: E0111 07:56:49.715101    1317 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-2ffx6" containerName="coredns"
	Jan 11 07:56:49 embed-certs-285966 kubelet[1317]: I0111 07:56:49.725160    1317 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=13.72513965 podStartE2EDuration="13.72513965s" podCreationTimestamp="2026-01-11 07:56:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-11 07:56:49.724861785 +0000 UTC m=+19.158933485" watchObservedRunningTime="2026-01-11 07:56:49.72513965 +0000 UTC m=+19.159211350"
	Jan 11 07:56:49 embed-certs-285966 kubelet[1317]: I0111 07:56:49.735269    1317 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-2ffx6" podStartSLOduration=13.735247687 podStartE2EDuration="13.735247687s" podCreationTimestamp="2026-01-11 07:56:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-11 07:56:49.734920415 +0000 UTC m=+19.168992110" watchObservedRunningTime="2026-01-11 07:56:49.735247687 +0000 UTC m=+19.169319387"
	Jan 11 07:56:50 embed-certs-285966 kubelet[1317]: E0111 07:56:50.717595    1317 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-2ffx6" containerName="coredns"
	Jan 11 07:56:51 embed-certs-285966 kubelet[1317]: E0111 07:56:51.719751    1317 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-2ffx6" containerName="coredns"
	Jan 11 07:56:52 embed-certs-285966 kubelet[1317]: I0111 07:56:52.205529    1317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khtnc\" (UniqueName: \"kubernetes.io/projected/b919cf59-33aa-47fd-a15a-51037a983c6e-kube-api-access-khtnc\") pod \"busybox\" (UID: \"b919cf59-33aa-47fd-a15a-51037a983c6e\") " pod="default/busybox"
	
	
	==> storage-provisioner [78d196f12d7c72eb23cd176358cf1aebf9ae7cd1f95e08f95d913d8a15a377e1] <==
	I0111 07:56:48.778944       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0111 07:56:48.789750       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0111 07:56:48.789816       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W0111 07:56:48.792727       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 07:56:48.802591       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0111 07:56:48.802794       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0111 07:56:48.803022       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-285966_d6b8326e-aeb7-4b2e-b2a5-c37864d72484!
	I0111 07:56:48.803078       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ba85867d-09e8-4b62-9236-97b9a3a7c673", APIVersion:"v1", ResourceVersion:"456", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-285966_d6b8326e-aeb7-4b2e-b2a5-c37864d72484 became leader
	W0111 07:56:48.809141       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 07:56:48.818256       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0111 07:56:48.903572       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-285966_d6b8326e-aeb7-4b2e-b2a5-c37864d72484!
	W0111 07:56:50.822050       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 07:56:50.828636       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 07:56:52.832720       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 07:56:52.837670       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 07:56:54.841268       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 07:56:54.848274       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 07:56:56.851389       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 07:56:56.855761       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 07:56:58.880605       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 07:56:59.027619       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 07:57:01.031614       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 07:57:01.036699       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-285966 -n embed-certs-285966
helpers_test.go:270: (dbg) Run:  kubectl --context embed-certs-285966 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.04s)
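The storage-provisioner log above repeatedly prints the client-go warning that v1 Endpoints is deprecated in v1.33+ in favour of discovery.k8s.io/v1 EndpointSlice, while the leader-election lease it guards is still acquired successfully. A hedged, illustrative way to look at the corresponding EndpointSlice objects on this profile (the context name is taken from this run; this is not a step the test itself performs):

	kubectl --context embed-certs-285966 -n kube-system get endpointslices.discovery.k8s.io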

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.17s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-585688 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-585688 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (247.454603ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-11T07:57:45Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-585688 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-585688 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-585688 describe deploy/metrics-server -n kube-system: exit status 1 (60.709475ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-585688 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
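As with the other MK_ADDON_ENABLE_PAUSED failures in this run, the addon never reached the cluster: minikube's paused-state check shells out to "sudo runc list -f json", which exits non-zero on this CRI-O node ("open /run/runc: no such file or directory"), so no metrics-server deployment exists for the follow-up kubectl describe. A hedged, illustrative way to confirm the CRI-O side directly (profile name taken from this run; the crictl invocation is an assumption for diagnosis, not something the test runs):

	out/minikube-linux-amd64 -p default-k8s-diff-port-585688 ssh -- sudo crictl ps --state running --quiet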
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect default-k8s-diff-port-585688
helpers_test.go:244: (dbg) docker inspect default-k8s-diff-port-585688:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "4e491c631d37cd114a87ec85803ddb84ccce4a183e656efbfac985531201e66b",
	        "Created": "2026-01-11T07:57:00.448699199Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 307993,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2026-01-11T07:57:00.487616574Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:9d81ad6f098dcf669510a21e6f98da4d3301079c139c22f3d7bcee050830f45e",
	        "ResolvConfPath": "/var/lib/docker/containers/4e491c631d37cd114a87ec85803ddb84ccce4a183e656efbfac985531201e66b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4e491c631d37cd114a87ec85803ddb84ccce4a183e656efbfac985531201e66b/hostname",
	        "HostsPath": "/var/lib/docker/containers/4e491c631d37cd114a87ec85803ddb84ccce4a183e656efbfac985531201e66b/hosts",
	        "LogPath": "/var/lib/docker/containers/4e491c631d37cd114a87ec85803ddb84ccce4a183e656efbfac985531201e66b/4e491c631d37cd114a87ec85803ddb84ccce4a183e656efbfac985531201e66b-json.log",
	        "Name": "/default-k8s-diff-port-585688",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-585688:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-585688",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "4e491c631d37cd114a87ec85803ddb84ccce4a183e656efbfac985531201e66b",
	                "LowerDir": "/var/lib/docker/overlay2/31bb134ce8349339fa6c08d8e9235d969f6e89a7b0ac8aa0139d1b384ddee071-init/diff:/var/lib/docker/overlay2/fbed3e2386e680328722e9ed68250a4aae1dd3b50a64848ee2ebb685fb1cc002/diff",
	                "MergedDir": "/var/lib/docker/overlay2/31bb134ce8349339fa6c08d8e9235d969f6e89a7b0ac8aa0139d1b384ddee071/merged",
	                "UpperDir": "/var/lib/docker/overlay2/31bb134ce8349339fa6c08d8e9235d969f6e89a7b0ac8aa0139d1b384ddee071/diff",
	                "WorkDir": "/var/lib/docker/overlay2/31bb134ce8349339fa6c08d8e9235d969f6e89a7b0ac8aa0139d1b384ddee071/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-585688",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-585688/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-585688",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-585688",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-585688",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "1235bd4cb3b93258790e4ed0502b41eea611bb1b34d8eb5eeb866cde952810ba",
	            "SandboxKey": "/var/run/docker/netns/1235bd4cb3b9",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33103"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33104"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33107"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33105"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33106"
	                    }
	                ]
	            },
	            "Networks": {
	                "default-k8s-diff-port-585688": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "a19185e10f04402d7102cb5e5f5b024dff862ce9b51ccbccdcfe21cef3be911e",
	                    "EndpointID": "3ecc8c3af00ad806e4b11053cecdc5d94ded971a4b61047d21030746875a71c8",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "6a:fd:cb:96:9a:4a",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-585688",
	                        "4e491c631d37"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-585688 -n default-k8s-diff-port-585688
helpers_test.go:253: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-585688 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-585688 logs -n 25: (1.0088686s)
helpers_test.go:261: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ ssh     │ -p bridge-181803 sudo cri-dockerd --version                                                                                                                                                                                                   │ bridge-181803                │ jenkins │ v1.37.0 │ 11 Jan 26 07:56 UTC │ 11 Jan 26 07:56 UTC │
	│ ssh     │ -p bridge-181803 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ bridge-181803                │ jenkins │ v1.37.0 │ 11 Jan 26 07:56 UTC │                     │
	│ ssh     │ -p bridge-181803 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ bridge-181803                │ jenkins │ v1.37.0 │ 11 Jan 26 07:56 UTC │ 11 Jan 26 07:56 UTC │
	│ ssh     │ -p bridge-181803 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ bridge-181803                │ jenkins │ v1.37.0 │ 11 Jan 26 07:56 UTC │ 11 Jan 26 07:56 UTC │
	│ stop    │ -p old-k8s-version-086314 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-086314       │ jenkins │ v1.37.0 │ 11 Jan 26 07:56 UTC │ 11 Jan 26 07:57 UTC │
	│ ssh     │ -p bridge-181803 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ bridge-181803                │ jenkins │ v1.37.0 │ 11 Jan 26 07:56 UTC │ 11 Jan 26 07:56 UTC │
	│ ssh     │ -p bridge-181803 sudo containerd config dump                                                                                                                                                                                                  │ bridge-181803                │ jenkins │ v1.37.0 │ 11 Jan 26 07:56 UTC │ 11 Jan 26 07:56 UTC │
	│ ssh     │ -p bridge-181803 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ bridge-181803                │ jenkins │ v1.37.0 │ 11 Jan 26 07:56 UTC │ 11 Jan 26 07:56 UTC │
	│ ssh     │ -p bridge-181803 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ bridge-181803                │ jenkins │ v1.37.0 │ 11 Jan 26 07:56 UTC │ 11 Jan 26 07:56 UTC │
	│ ssh     │ -p bridge-181803 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ bridge-181803                │ jenkins │ v1.37.0 │ 11 Jan 26 07:56 UTC │ 11 Jan 26 07:56 UTC │
	│ ssh     │ -p bridge-181803 sudo crio config                                                                                                                                                                                                             │ bridge-181803                │ jenkins │ v1.37.0 │ 11 Jan 26 07:56 UTC │ 11 Jan 26 07:56 UTC │
	│ delete  │ -p bridge-181803                                                                                                                                                                                                                              │ bridge-181803                │ jenkins │ v1.37.0 │ 11 Jan 26 07:56 UTC │ 11 Jan 26 07:56 UTC │
	│ delete  │ -p disable-driver-mounts-768607                                                                                                                                                                                                               │ disable-driver-mounts-768607 │ jenkins │ v1.37.0 │ 11 Jan 26 07:56 UTC │ 11 Jan 26 07:56 UTC │
	│ start   │ -p default-k8s-diff-port-585688 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-585688 │ jenkins │ v1.37.0 │ 11 Jan 26 07:56 UTC │ 11 Jan 26 07:57 UTC │
	│ addons  │ enable metrics-server -p no-preload-875594 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-875594            │ jenkins │ v1.37.0 │ 11 Jan 26 07:56 UTC │                     │
	│ stop    │ -p no-preload-875594 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-875594            │ jenkins │ v1.37.0 │ 11 Jan 26 07:56 UTC │ 11 Jan 26 07:57 UTC │
	│ addons  │ enable metrics-server -p embed-certs-285966 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-285966           │ jenkins │ v1.37.0 │ 11 Jan 26 07:57 UTC │                     │
	│ stop    │ -p embed-certs-285966 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-285966           │ jenkins │ v1.37.0 │ 11 Jan 26 07:57 UTC │ 11 Jan 26 07:57 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-086314 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-086314       │ jenkins │ v1.37.0 │ 11 Jan 26 07:57 UTC │ 11 Jan 26 07:57 UTC │
	│ start   │ -p old-k8s-version-086314 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-086314       │ jenkins │ v1.37.0 │ 11 Jan 26 07:57 UTC │                     │
	│ addons  │ enable dashboard -p no-preload-875594 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-875594            │ jenkins │ v1.37.0 │ 11 Jan 26 07:57 UTC │ 11 Jan 26 07:57 UTC │
	│ start   │ -p no-preload-875594 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                       │ no-preload-875594            │ jenkins │ v1.37.0 │ 11 Jan 26 07:57 UTC │                     │
	│ addons  │ enable dashboard -p embed-certs-285966 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-285966           │ jenkins │ v1.37.0 │ 11 Jan 26 07:57 UTC │ 11 Jan 26 07:57 UTC │
	│ start   │ -p embed-certs-285966 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                        │ embed-certs-285966           │ jenkins │ v1.37.0 │ 11 Jan 26 07:57 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-585688 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-585688 │ jenkins │ v1.37.0 │ 11 Jan 26 07:57 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2026/01/11 07:57:19
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0111 07:57:19.762086  314330 out.go:360] Setting OutFile to fd 1 ...
	I0111 07:57:19.762324  314330 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 07:57:19.762333  314330 out.go:374] Setting ErrFile to fd 2...
	I0111 07:57:19.762337  314330 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 07:57:19.762495  314330 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22402-4689/.minikube/bin
	I0111 07:57:19.762961  314330 out.go:368] Setting JSON to false
	I0111 07:57:19.764137  314330 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2388,"bootTime":1768115852,"procs":362,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0111 07:57:19.764189  314330 start.go:143] virtualization: kvm guest
	I0111 07:57:19.766263  314330 out.go:179] * [embed-certs-285966] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0111 07:57:19.768009  314330 out.go:179]   - MINIKUBE_LOCATION=22402
	I0111 07:57:19.768046  314330 notify.go:221] Checking for updates...
	I0111 07:57:19.770632  314330 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0111 07:57:19.771934  314330 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22402-4689/kubeconfig
	I0111 07:57:19.773247  314330 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22402-4689/.minikube
	I0111 07:57:19.774459  314330 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0111 07:57:19.775728  314330 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0111 07:57:19.777454  314330 config.go:182] Loaded profile config "embed-certs-285966": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 07:57:19.778142  314330 driver.go:422] Setting default libvirt URI to qemu:///system
	I0111 07:57:19.809699  314330 docker.go:124] docker version: linux-29.1.4:Docker Engine - Community
	I0111 07:57:19.810031  314330 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0111 07:57:19.878502  314330 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:77 SystemTime:2026-01-11 07:57:19.866213945 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0111 07:57:19.878645  314330 docker.go:319] overlay module found
	I0111 07:57:19.884206  314330 out.go:179] * Using the docker driver based on existing profile
	I0111 07:57:15.724180  306327 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0111 07:57:15.724247  306327 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0111 07:57:15.724281  306327 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-585688 minikube.k8s.io/updated_at=2026_01_11T07_57_15_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=4473aa4ffaa416872fe849e19c0ce3dabca02c04 minikube.k8s.io/name=default-k8s-diff-port-585688 minikube.k8s.io/primary=true
	I0111 07:57:15.735339  306327 ops.go:34] apiserver oom_adj: -16
	I0111 07:57:15.807153  306327 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0111 07:57:16.307403  306327 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0111 07:57:16.807451  306327 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0111 07:57:17.308041  306327 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0111 07:57:17.807521  306327 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0111 07:57:18.308080  306327 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0111 07:57:18.807920  306327 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0111 07:57:19.307904  306327 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0111 07:57:19.808029  306327 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0111 07:57:19.893216  306327 kubeadm.go:1114] duration metric: took 4.169035612s to wait for elevateKubeSystemPrivileges
	I0111 07:57:19.893249  306327 kubeadm.go:403] duration metric: took 12.331475466s to StartCluster
	I0111 07:57:19.893272  306327 settings.go:142] acquiring lock: {Name:mk091111a06dcfa678353e028948ce400417c137 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 07:57:19.893635  306327 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22402-4689/kubeconfig
	I0111 07:57:19.895156  306327 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-4689/kubeconfig: {Name:mkf02a7b755a7b9c0efbafb355649086e2a25e84 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 07:57:19.895483  306327 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0111 07:57:19.895536  306327 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0111 07:57:19.895565  306327 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0111 07:57:19.895744  306327 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-585688"
	I0111 07:57:19.895766  306327 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-585688"
	I0111 07:57:19.895768  306327 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-585688"
	I0111 07:57:19.895814  306327 host.go:66] Checking if "default-k8s-diff-port-585688" exists ...
	I0111 07:57:19.895844  306327 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-585688"
	I0111 07:57:19.895979  306327 config.go:182] Loaded profile config "default-k8s-diff-port-585688": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 07:57:19.896207  306327 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-585688 --format={{.State.Status}}
	I0111 07:57:19.896361  306327 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-585688 --format={{.State.Status}}
	I0111 07:57:19.898301  306327 out.go:179] * Verifying Kubernetes components...
	I0111 07:57:19.900045  306327 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0111 07:57:19.929042  306327 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0111 07:57:19.885992  314330 start.go:309] selected driver: docker
	I0111 07:57:19.886009  314330 start.go:928] validating driver "docker" against &{Name:embed-certs-285966 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:embed-certs-285966 Namespace:default APIServerHAVIP: APIServerN
ame:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0111 07:57:19.886104  314330 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0111 07:57:19.886972  314330 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0111 07:57:20.008775  314330 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:79 SystemTime:2026-01-11 07:57:19.9957517 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map
[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0111 07:57:20.009241  314330 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0111 07:57:20.009312  314330 cni.go:84] Creating CNI manager for ""
	I0111 07:57:20.009373  314330 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0111 07:57:20.009423  314330 start.go:353] cluster config:
	{Name:embed-certs-285966 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:embed-certs-285966 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0111 07:57:20.011113  314330 out.go:179] * Starting "embed-certs-285966" primary control-plane node in "embed-certs-285966" cluster
	I0111 07:57:20.012338  314330 cache.go:134] Beginning downloading kic base image for docker with crio
	I0111 07:57:20.013567  314330 out.go:179] * Pulling base image v0.0.48-1768032998-22402 ...
	I0111 07:57:20.014775  314330 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0111 07:57:20.014840  314330 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22402-4689/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4
	I0111 07:57:20.014866  314330 cache.go:65] Caching tarball of preloaded images
	I0111 07:57:20.014863  314330 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 in local docker daemon
	I0111 07:57:20.014978  314330 preload.go:251] Found /home/jenkins/minikube-integration/22402-4689/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0111 07:57:20.014996  314330 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I0111 07:57:20.015114  314330 profile.go:143] Saving config to /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/embed-certs-285966/config.json ...
	I0111 07:57:20.042640  314330 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 in local docker daemon, skipping pull
	I0111 07:57:20.042665  314330 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 exists in daemon, skipping load
	I0111 07:57:20.042687  314330 cache.go:243] Successfully downloaded all kic artifacts
	I0111 07:57:20.042726  314330 start.go:360] acquireMachinesLock for embed-certs-285966: {Name:mk50d8c197b424b03264cb0566254a8e652b1255 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0111 07:57:20.042787  314330 start.go:364] duration metric: took 40.02µs to acquireMachinesLock for "embed-certs-285966"
	I0111 07:57:20.042876  314330 start.go:96] Skipping create...Using existing machine configuration
	I0111 07:57:20.042890  314330 fix.go:54] fixHost starting: 
	I0111 07:57:20.043192  314330 cli_runner.go:164] Run: docker container inspect embed-certs-285966 --format={{.State.Status}}
	I0111 07:57:20.066420  314330 fix.go:112] recreateIfNeeded on embed-certs-285966: state=Stopped err=<nil>
	W0111 07:57:20.066462  314330 fix.go:138] unexpected machine state, will restart: <nil>
	I0111 07:57:15.353015  312970 out.go:252] * Restarting existing docker container for "no-preload-875594" ...
	I0111 07:57:15.353078  312970 cli_runner.go:164] Run: docker start no-preload-875594
	I0111 07:57:15.621686  312970 cli_runner.go:164] Run: docker container inspect no-preload-875594 --format={{.State.Status}}
	I0111 07:57:15.642902  312970 kic.go:430] container "no-preload-875594" state is running.
	I0111 07:57:15.643403  312970 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-875594
	I0111 07:57:15.667424  312970 profile.go:143] Saving config to /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/no-preload-875594/config.json ...
	I0111 07:57:15.667763  312970 machine.go:94] provisionDockerMachine start ...
	I0111 07:57:15.667878  312970 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-875594
	I0111 07:57:15.689021  312970 main.go:144] libmachine: Using SSH client type: native
	I0111 07:57:15.689271  312970 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x9087e0] 0x90b480 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I0111 07:57:15.689288  312970 main.go:144] libmachine: About to run SSH command:
	hostname
	I0111 07:57:15.689854  312970 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:39334->127.0.0.1:33113: read: connection reset by peer
	I0111 07:57:18.838152  312970 main.go:144] libmachine: SSH cmd err, output: <nil>: no-preload-875594
	
	I0111 07:57:18.838177  312970 ubuntu.go:182] provisioning hostname "no-preload-875594"
	I0111 07:57:18.838227  312970 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-875594
	I0111 07:57:18.859473  312970 main.go:144] libmachine: Using SSH client type: native
	I0111 07:57:18.859735  312970 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x9087e0] 0x90b480 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I0111 07:57:18.859750  312970 main.go:144] libmachine: About to run SSH command:
	sudo hostname no-preload-875594 && echo "no-preload-875594" | sudo tee /etc/hostname
	I0111 07:57:19.027118  312970 main.go:144] libmachine: SSH cmd err, output: <nil>: no-preload-875594
	
	I0111 07:57:19.027221  312970 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-875594
	I0111 07:57:19.048318  312970 main.go:144] libmachine: Using SSH client type: native
	I0111 07:57:19.048582  312970 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x9087e0] 0x90b480 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I0111 07:57:19.048610  312970 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-875594' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-875594/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-875594' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0111 07:57:19.215655  312970 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I0111 07:57:19.215686  312970 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22402-4689/.minikube CaCertPath:/home/jenkins/minikube-integration/22402-4689/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22402-4689/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22402-4689/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22402-4689/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22402-4689/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22402-4689/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22402-4689/.minikube}
	I0111 07:57:19.215712  312970 ubuntu.go:190] setting up certificates
	I0111 07:57:19.215724  312970 provision.go:84] configureAuth start
	I0111 07:57:19.215841  312970 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-875594
	I0111 07:57:19.234831  312970 provision.go:143] copyHostCerts
	I0111 07:57:19.234911  312970 exec_runner.go:144] found /home/jenkins/minikube-integration/22402-4689/.minikube/ca.pem, removing ...
	I0111 07:57:19.234927  312970 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22402-4689/.minikube/ca.pem
	I0111 07:57:19.235019  312970 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22402-4689/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22402-4689/.minikube/ca.pem (1082 bytes)
	I0111 07:57:19.235155  312970 exec_runner.go:144] found /home/jenkins/minikube-integration/22402-4689/.minikube/cert.pem, removing ...
	I0111 07:57:19.235167  312970 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22402-4689/.minikube/cert.pem
	I0111 07:57:19.235206  312970 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22402-4689/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22402-4689/.minikube/cert.pem (1123 bytes)
	I0111 07:57:19.235288  312970 exec_runner.go:144] found /home/jenkins/minikube-integration/22402-4689/.minikube/key.pem, removing ...
	I0111 07:57:19.235297  312970 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22402-4689/.minikube/key.pem
	I0111 07:57:19.235326  312970 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22402-4689/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22402-4689/.minikube/key.pem (1675 bytes)
	I0111 07:57:19.235397  312970 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22402-4689/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22402-4689/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22402-4689/.minikube/certs/ca-key.pem org=jenkins.no-preload-875594 san=[127.0.0.1 192.168.76.2 localhost minikube no-preload-875594]
	I0111 07:57:19.302526  312970 provision.go:177] copyRemoteCerts
	I0111 07:57:19.302595  312970 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0111 07:57:19.302637  312970 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-875594
	I0111 07:57:19.322109  312970 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/22402-4689/.minikube/machines/no-preload-875594/id_rsa Username:docker}
	I0111 07:57:19.437213  312970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-4689/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0111 07:57:19.459590  312970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-4689/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0111 07:57:19.494600  312970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-4689/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0111 07:57:19.512937  312970 provision.go:87] duration metric: took 297.194983ms to configureAuth
	I0111 07:57:19.512966  312970 ubuntu.go:206] setting minikube options for container-runtime
	I0111 07:57:19.513138  312970 config.go:182] Loaded profile config "no-preload-875594": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 07:57:19.513226  312970 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-875594
	I0111 07:57:19.534315  312970 main.go:144] libmachine: Using SSH client type: native
	I0111 07:57:19.534578  312970 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x9087e0] 0x90b480 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I0111 07:57:19.534609  312970 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0111 07:57:19.908312  312970 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0111 07:57:19.908341  312970 machine.go:97] duration metric: took 4.240557146s to provisionDockerMachine
	I0111 07:57:19.908356  312970 start.go:293] postStartSetup for "no-preload-875594" (driver="docker")
	I0111 07:57:19.908388  312970 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0111 07:57:19.908452  312970 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0111 07:57:19.908513  312970 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-875594
	I0111 07:57:19.954746  312970 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/22402-4689/.minikube/machines/no-preload-875594/id_rsa Username:docker}
	I0111 07:57:20.082977  312970 ssh_runner.go:195] Run: cat /etc/os-release
	I0111 07:57:20.089964  312970 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0111 07:57:20.089999  312970 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I0111 07:57:20.090013  312970 filesync.go:126] Scanning /home/jenkins/minikube-integration/22402-4689/.minikube/addons for local assets ...
	I0111 07:57:20.090072  312970 filesync.go:126] Scanning /home/jenkins/minikube-integration/22402-4689/.minikube/files for local assets ...
	I0111 07:57:20.090187  312970 filesync.go:149] local asset: /home/jenkins/minikube-integration/22402-4689/.minikube/files/etc/ssl/certs/82382.pem -> 82382.pem in /etc/ssl/certs
	I0111 07:57:20.090315  312970 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0111 07:57:19.931014  306327 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0111 07:57:19.931038  306327 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0111 07:57:19.931101  306327 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-585688
	I0111 07:57:19.936761  306327 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-585688"
	I0111 07:57:19.936833  306327 host.go:66] Checking if "default-k8s-diff-port-585688" exists ...
	I0111 07:57:19.937341  306327 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-585688 --format={{.State.Status}}
	I0111 07:57:19.972068  306327 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22402-4689/.minikube/machines/default-k8s-diff-port-585688/id_rsa Username:docker}
	I0111 07:57:19.973795  306327 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I0111 07:57:19.973826  306327 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0111 07:57:19.973890  306327 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-585688
	I0111 07:57:20.001059  306327 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22402-4689/.minikube/machines/default-k8s-diff-port-585688/id_rsa Username:docker}
	I0111 07:57:20.014433  306327 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0111 07:57:20.076469  306327 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0111 07:57:20.111292  306327 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0111 07:57:20.127281  306327 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0111 07:57:20.240880  306327 start.go:987] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
	I0111 07:57:20.242576  306327 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-585688" to be "Ready" ...
	I0111 07:57:20.500963  306327 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I0111 07:57:20.502160  306327 addons.go:530] duration metric: took 606.59138ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0111 07:57:20.103753  312970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-4689/.minikube/files/etc/ssl/certs/82382.pem --> /etc/ssl/certs/82382.pem (1708 bytes)
	I0111 07:57:20.128457  312970 start.go:296] duration metric: took 220.086328ms for postStartSetup
	I0111 07:57:20.128541  312970 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0111 07:57:20.128599  312970 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-875594
	I0111 07:57:20.157723  312970 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/22402-4689/.minikube/machines/no-preload-875594/id_rsa Username:docker}
	I0111 07:57:20.272095  312970 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0111 07:57:20.280214  312970 fix.go:56] duration metric: took 4.947684322s for fixHost
	I0111 07:57:20.280243  312970 start.go:83] releasing machines lock for "no-preload-875594", held for 4.947735063s
	I0111 07:57:20.280309  312970 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-875594
	I0111 07:57:20.307610  312970 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0111 07:57:20.307694  312970 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-875594
	I0111 07:57:20.307915  312970 ssh_runner.go:195] Run: cat /version.json
	I0111 07:57:20.309867  312970 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-875594
	I0111 07:57:20.336575  312970 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/22402-4689/.minikube/machines/no-preload-875594/id_rsa Username:docker}
	I0111 07:57:20.336903  312970 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/22402-4689/.minikube/machines/no-preload-875594/id_rsa Username:docker}
	I0111 07:57:20.515496  312970 ssh_runner.go:195] Run: systemctl --version
	I0111 07:57:20.522183  312970 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0111 07:57:20.565094  312970 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0111 07:57:20.571905  312970 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0111 07:57:20.571981  312970 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0111 07:57:20.579883  312970 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0111 07:57:20.579903  312970 start.go:496] detecting cgroup driver to use...
	I0111 07:57:20.579931  312970 detect.go:178] detected "systemd" cgroup driver on host os
	I0111 07:57:20.579978  312970 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0111 07:57:20.594141  312970 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0111 07:57:20.606731  312970 docker.go:218] disabling cri-docker service (if available) ...
	I0111 07:57:20.606782  312970 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0111 07:57:20.622440  312970 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0111 07:57:20.641592  312970 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0111 07:57:20.764524  312970 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0111 07:57:20.854482  312970 docker.go:234] disabling docker service ...
	I0111 07:57:20.854550  312970 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0111 07:57:20.871532  312970 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0111 07:57:20.886739  312970 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0111 07:57:21.003331  312970 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0111 07:57:21.121869  312970 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0111 07:57:21.139404  312970 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0111 07:57:21.158094  312970 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0111 07:57:21.158165  312970 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 07:57:21.169978  312970 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I0111 07:57:21.170052  312970 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 07:57:21.181093  312970 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 07:57:21.193430  312970 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 07:57:21.205171  312970 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0111 07:57:21.216085  312970 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 07:57:21.227220  312970 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 07:57:21.237452  312970 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 07:57:21.249010  312970 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0111 07:57:21.258522  312970 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0111 07:57:21.268310  312970 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0111 07:57:21.380406  312970 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0111 07:57:21.567943  312970 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I0111 07:57:21.568022  312970 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0111 07:57:21.573466  312970 start.go:574] Will wait 60s for crictl version
	I0111 07:57:21.573534  312970 ssh_runner.go:195] Run: which crictl
	I0111 07:57:21.578521  312970 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I0111 07:57:21.610366  312970 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.35.0
	RuntimeApiVersion:  v1
	I0111 07:57:21.610454  312970 ssh_runner.go:195] Run: crio --version
	I0111 07:57:21.647700  312970 ssh_runner.go:195] Run: crio --version
	I0111 07:57:21.685219  312970 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.35.0 ...
	I0111 07:57:18.029596  310739 addons.go:530] duration metric: took 3.216008345s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I0111 07:57:18.520972  310739 api_server.go:299] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I0111 07:57:18.525063  310739 api_server.go:325] https://192.168.103.2:8443/healthz returned 200:
	ok
	I0111 07:57:18.526271  310739 api_server.go:141] control plane version: v1.28.0
	I0111 07:57:18.526307  310739 api_server.go:131] duration metric: took 506.116252ms to wait for apiserver health ...
	I0111 07:57:18.526318  310739 system_pods.go:43] waiting for kube-system pods to appear ...
	I0111 07:57:18.530444  310739 system_pods.go:59] 8 kube-system pods found
	I0111 07:57:18.530494  310739 system_pods.go:61] "coredns-5dd5756b68-797jn" [ac6485e5-f57c-40c4-aa87-a724ccfd7fd3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0111 07:57:18.530508  310739 system_pods.go:61] "etcd-old-k8s-version-086314" [dc8173a2-cd5c-4fd5-8bc3-167b8214b512] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0111 07:57:18.530522  310739 system_pods.go:61] "kindnet-9sfd9" [efc9328f-933c-4a16-a505-312eb58fdb57] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0111 07:57:18.530537  310739 system_pods.go:61] "kube-apiserver-old-k8s-version-086314" [6c910d3a-ed92-4c51-b482-fb1a13ebe4c2] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0111 07:57:18.530559  310739 system_pods.go:61] "kube-controller-manager-old-k8s-version-086314" [e235516a-7365-4893-adca-12c49d8f33ca] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0111 07:57:18.530575  310739 system_pods.go:61] "kube-proxy-dl9xq" [49020e24-0f40-4ef2-93a3-d3d2938acea7] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0111 07:57:18.530585  310739 system_pods.go:61] "kube-scheduler-old-k8s-version-086314" [d5bc68e0-ee93-45db-81eb-4eee164c7a88] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0111 07:57:18.530594  310739 system_pods.go:61] "storage-provisioner" [56946fe8-d095-4fe2-8742-0dc984828edf] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0111 07:57:18.530600  310739 system_pods.go:74] duration metric: took 4.277253ms to wait for pod list to return data ...
	I0111 07:57:18.530610  310739 default_sa.go:34] waiting for default service account to be created ...
	I0111 07:57:18.532953  310739 default_sa.go:45] found service account: "default"
	I0111 07:57:18.532975  310739 default_sa.go:55] duration metric: took 2.354372ms for default service account to be created ...
	I0111 07:57:18.532984  310739 system_pods.go:116] waiting for k8s-apps to be running ...
	I0111 07:57:18.536161  310739 system_pods.go:86] 8 kube-system pods found
	I0111 07:57:18.536187  310739 system_pods.go:89] "coredns-5dd5756b68-797jn" [ac6485e5-f57c-40c4-aa87-a724ccfd7fd3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0111 07:57:18.536194  310739 system_pods.go:89] "etcd-old-k8s-version-086314" [dc8173a2-cd5c-4fd5-8bc3-167b8214b512] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0111 07:57:18.536200  310739 system_pods.go:89] "kindnet-9sfd9" [efc9328f-933c-4a16-a505-312eb58fdb57] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0111 07:57:18.536206  310739 system_pods.go:89] "kube-apiserver-old-k8s-version-086314" [6c910d3a-ed92-4c51-b482-fb1a13ebe4c2] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0111 07:57:18.536211  310739 system_pods.go:89] "kube-controller-manager-old-k8s-version-086314" [e235516a-7365-4893-adca-12c49d8f33ca] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0111 07:57:18.536218  310739 system_pods.go:89] "kube-proxy-dl9xq" [49020e24-0f40-4ef2-93a3-d3d2938acea7] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0111 07:57:18.536223  310739 system_pods.go:89] "kube-scheduler-old-k8s-version-086314" [d5bc68e0-ee93-45db-81eb-4eee164c7a88] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0111 07:57:18.536229  310739 system_pods.go:89] "storage-provisioner" [56946fe8-d095-4fe2-8742-0dc984828edf] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0111 07:57:18.536235  310739 system_pods.go:126] duration metric: took 3.246203ms to wait for k8s-apps to be running ...
	I0111 07:57:18.536244  310739 system_svc.go:44] waiting for kubelet service to be running ....
	I0111 07:57:18.536281  310739 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0111 07:57:18.549711  310739 system_svc.go:56] duration metric: took 13.46005ms WaitForService to wait for kubelet
	I0111 07:57:18.549739  310739 kubeadm.go:587] duration metric: took 3.736181748s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0111 07:57:18.549768  310739 node_conditions.go:102] verifying NodePressure condition ...
	I0111 07:57:18.552780  310739 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0111 07:57:18.552820  310739 node_conditions.go:123] node cpu capacity is 8
	I0111 07:57:18.552836  310739 node_conditions.go:105] duration metric: took 3.062436ms to run NodePressure ...
	I0111 07:57:18.552848  310739 start.go:242] waiting for startup goroutines ...
	I0111 07:57:18.552855  310739 start.go:247] waiting for cluster config update ...
	I0111 07:57:18.552865  310739 start.go:256] writing updated cluster config ...
	I0111 07:57:18.553107  310739 ssh_runner.go:195] Run: rm -f paused
	I0111 07:57:18.557176  310739 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0111 07:57:18.561734  310739 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-797jn" in "kube-system" namespace to be "Ready" or be gone ...
	W0111 07:57:20.570922  310739 pod_ready.go:104] pod "coredns-5dd5756b68-797jn" is not "Ready", error: <nil>
	I0111 07:57:21.686377  312970 cli_runner.go:164] Run: docker network inspect no-preload-875594 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0111 07:57:21.711575  312970 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0111 07:57:21.717169  312970 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0111 07:57:21.731752  312970 kubeadm.go:884] updating cluster {Name:no-preload-875594 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:no-preload-875594 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I0111 07:57:21.731911  312970 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0111 07:57:21.731955  312970 ssh_runner.go:195] Run: sudo crictl images --output json
	I0111 07:57:21.774826  312970 crio.go:561] all images are preloaded for cri-o runtime.
	I0111 07:57:21.774852  312970 cache_images.go:86] Images are preloaded, skipping loading
	I0111 07:57:21.774861  312970 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0 crio true true} ...
	I0111 07:57:21.774969  312970 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-875594 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:no-preload-875594 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0111 07:57:21.775033  312970 ssh_runner.go:195] Run: crio config
	I0111 07:57:21.831075  312970 cni.go:84] Creating CNI manager for ""
	I0111 07:57:21.831099  312970 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0111 07:57:21.831113  312970 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I0111 07:57:21.831144  312970 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-875594 NodeName:no-preload-875594 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0111 07:57:21.831324  312970 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-875594"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0111 07:57:21.831399  312970 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I0111 07:57:21.840350  312970 binaries.go:51] Found k8s binaries, skipping transfer
	I0111 07:57:21.840423  312970 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0111 07:57:21.849368  312970 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I0111 07:57:21.864226  312970 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0111 07:57:21.877565  312970 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
	I0111 07:57:21.890688  312970 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0111 07:57:21.894753  312970 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0111 07:57:21.906212  312970 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0111 07:57:21.997162  312970 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0111 07:57:22.024238  312970 certs.go:69] Setting up /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/no-preload-875594 for IP: 192.168.76.2
	I0111 07:57:22.024263  312970 certs.go:195] generating shared ca certs ...
	I0111 07:57:22.024283  312970 certs.go:227] acquiring lock for ca certs: {Name:mk9f7a57f61b85f2c0f52e44b6a926769e368f21 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 07:57:22.024419  312970 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22402-4689/.minikube/ca.key
	I0111 07:57:22.024457  312970 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22402-4689/.minikube/proxy-client-ca.key
	I0111 07:57:22.024464  312970 certs.go:257] generating profile certs ...
	I0111 07:57:22.024537  312970 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/no-preload-875594/client.key
	I0111 07:57:22.024594  312970 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/no-preload-875594/apiserver.key.cd03be7e
	I0111 07:57:22.024631  312970 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/no-preload-875594/proxy-client.key
	I0111 07:57:22.024750  312970 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-4689/.minikube/certs/8238.pem (1338 bytes)
	W0111 07:57:22.024785  312970 certs.go:480] ignoring /home/jenkins/minikube-integration/22402-4689/.minikube/certs/8238_empty.pem, impossibly tiny 0 bytes
	I0111 07:57:22.024792  312970 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-4689/.minikube/certs/ca-key.pem (1679 bytes)
	I0111 07:57:22.024841  312970 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-4689/.minikube/certs/ca.pem (1082 bytes)
	I0111 07:57:22.024873  312970 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-4689/.minikube/certs/cert.pem (1123 bytes)
	I0111 07:57:22.024916  312970 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-4689/.minikube/certs/key.pem (1675 bytes)
	I0111 07:57:22.024958  312970 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-4689/.minikube/files/etc/ssl/certs/82382.pem (1708 bytes)
	I0111 07:57:22.025513  312970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-4689/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0111 07:57:22.045972  312970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-4689/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0111 07:57:22.066452  312970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-4689/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0111 07:57:22.086734  312970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-4689/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0111 07:57:22.110372  312970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/no-preload-875594/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0111 07:57:22.128356  312970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/no-preload-875594/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1671 bytes)
	I0111 07:57:22.145433  312970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/no-preload-875594/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0111 07:57:22.162640  312970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/no-preload-875594/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0111 07:57:22.179576  312970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-4689/.minikube/certs/8238.pem --> /usr/share/ca-certificates/8238.pem (1338 bytes)
	I0111 07:57:22.197171  312970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-4689/.minikube/files/etc/ssl/certs/82382.pem --> /usr/share/ca-certificates/82382.pem (1708 bytes)
	I0111 07:57:22.215179  312970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-4689/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0111 07:57:22.233291  312970 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I0111 07:57:22.246487  312970 ssh_runner.go:195] Run: openssl version
	I0111 07:57:22.253112  312970 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8238.pem
	I0111 07:57:22.260761  312970 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8238.pem /etc/ssl/certs/8238.pem
	I0111 07:57:22.268682  312970 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8238.pem
	I0111 07:57:22.272696  312970 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 11 07:27 /usr/share/ca-certificates/8238.pem
	I0111 07:57:22.272743  312970 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8238.pem
	I0111 07:57:22.308362  312970 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I0111 07:57:22.316431  312970 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/82382.pem
	I0111 07:57:22.324021  312970 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/82382.pem /etc/ssl/certs/82382.pem
	I0111 07:57:22.331291  312970 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/82382.pem
	I0111 07:57:22.335130  312970 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 11 07:27 /usr/share/ca-certificates/82382.pem
	I0111 07:57:22.335190  312970 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/82382.pem
	I0111 07:57:22.371740  312970 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I0111 07:57:22.380074  312970 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I0111 07:57:22.387757  312970 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I0111 07:57:22.395318  312970 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0111 07:57:22.398978  312970 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 11 07:23 /usr/share/ca-certificates/minikubeCA.pem
	I0111 07:57:22.399021  312970 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0111 07:57:22.434858  312970 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I0111 07:57:22.443277  312970 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0111 07:57:22.447362  312970 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0111 07:57:22.482469  312970 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0111 07:57:22.520555  312970 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0111 07:57:22.569645  312970 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0111 07:57:22.615336  312970 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0111 07:57:22.668289  312970 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0111 07:57:22.719998  312970 kubeadm.go:401] StartCluster: {Name:no-preload-875594 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:no-preload-875594 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0111 07:57:22.720095  312970 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0111 07:57:22.720142  312970 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0111 07:57:22.749466  312970 cri.go:96] found id: "86dd9362e4e980669196db9a6fc9ed3859ae025e5b431399b21fd17fef766db2"
	I0111 07:57:22.749493  312970 cri.go:96] found id: "04186244a4d8af4b1401de6299129517961f7e404cad46c3f8e49cbbbe78c312"
	I0111 07:57:22.749500  312970 cri.go:96] found id: "a56589dc7dd4d5b92389fcdb2961a8b65557ca9359492ffc4fb9f790b05e4dd4"
	I0111 07:57:22.749505  312970 cri.go:96] found id: "8198831067c65d7e7b8f32225ee8852e7d6229fee77dc266d93b4d5a687f2cd3"
	I0111 07:57:22.749509  312970 cri.go:96] found id: ""
	I0111 07:57:22.749558  312970 ssh_runner.go:195] Run: sudo runc list -f json
	W0111 07:57:22.762010  312970 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-11T07:57:22Z" level=error msg="open /run/runc: no such file or directory"
	I0111 07:57:22.762083  312970 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0111 07:57:22.770610  312970 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I0111 07:57:22.770628  312970 kubeadm.go:598] restartPrimaryControlPlane start ...
	I0111 07:57:22.770671  312970 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0111 07:57:22.778317  312970 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0111 07:57:22.779118  312970 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-875594" does not appear in /home/jenkins/minikube-integration/22402-4689/kubeconfig
	I0111 07:57:22.779591  312970 kubeconfig.go:62] /home/jenkins/minikube-integration/22402-4689/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-875594" cluster setting kubeconfig missing "no-preload-875594" context setting]
	I0111 07:57:22.780372  312970 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-4689/kubeconfig: {Name:mkf02a7b755a7b9c0efbafb355649086e2a25e84 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 07:57:22.782188  312970 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0111 07:57:22.791264  312970 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I0111 07:57:22.791299  312970 kubeadm.go:602] duration metric: took 20.661879ms to restartPrimaryControlPlane
	I0111 07:57:22.791310  312970 kubeadm.go:403] duration metric: took 71.321339ms to StartCluster
	I0111 07:57:22.791325  312970 settings.go:142] acquiring lock: {Name:mk091111a06dcfa678353e028948ce400417c137 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 07:57:22.791391  312970 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22402-4689/kubeconfig
	I0111 07:57:22.792776  312970 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-4689/kubeconfig: {Name:mkf02a7b755a7b9c0efbafb355649086e2a25e84 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 07:57:22.793037  312970 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0111 07:57:22.793099  312970 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0111 07:57:22.793207  312970 addons.go:70] Setting storage-provisioner=true in profile "no-preload-875594"
	I0111 07:57:22.793224  312970 addons.go:239] Setting addon storage-provisioner=true in "no-preload-875594"
	W0111 07:57:22.793232  312970 addons.go:248] addon storage-provisioner should already be in state true
	I0111 07:57:22.793260  312970 host.go:66] Checking if "no-preload-875594" exists ...
	I0111 07:57:22.793279  312970 config.go:182] Loaded profile config "no-preload-875594": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 07:57:22.793283  312970 addons.go:70] Setting dashboard=true in profile "no-preload-875594"
	I0111 07:57:22.793305  312970 addons.go:239] Setting addon dashboard=true in "no-preload-875594"
	W0111 07:57:22.793313  312970 addons.go:248] addon dashboard should already be in state true
	I0111 07:57:22.793319  312970 addons.go:70] Setting default-storageclass=true in profile "no-preload-875594"
	I0111 07:57:22.793347  312970 host.go:66] Checking if "no-preload-875594" exists ...
	I0111 07:57:22.793350  312970 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-875594"
	I0111 07:57:22.793654  312970 cli_runner.go:164] Run: docker container inspect no-preload-875594 --format={{.State.Status}}
	I0111 07:57:22.793763  312970 cli_runner.go:164] Run: docker container inspect no-preload-875594 --format={{.State.Status}}
	I0111 07:57:22.793772  312970 cli_runner.go:164] Run: docker container inspect no-preload-875594 --format={{.State.Status}}
	I0111 07:57:22.795416  312970 out.go:179] * Verifying Kubernetes components...
	I0111 07:57:22.796792  312970 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0111 07:57:22.820233  312970 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0111 07:57:22.821711  312970 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0111 07:57:22.821823  312970 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0111 07:57:22.821900  312970 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-875594
	I0111 07:57:22.822965  312970 addons.go:239] Setting addon default-storageclass=true in "no-preload-875594"
	I0111 07:57:22.823034  312970 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	W0111 07:57:22.823040  312970 addons.go:248] addon default-storageclass should already be in state true
	I0111 07:57:22.823109  312970 host.go:66] Checking if "no-preload-875594" exists ...
	I0111 07:57:22.823555  312970 cli_runner.go:164] Run: docker container inspect no-preload-875594 --format={{.State.Status}}
	I0111 07:57:22.825197  312970 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I0111 07:57:20.068173  314330 out.go:252] * Restarting existing docker container for "embed-certs-285966" ...
	I0111 07:57:20.068273  314330 cli_runner.go:164] Run: docker start embed-certs-285966
	I0111 07:57:20.417752  314330 cli_runner.go:164] Run: docker container inspect embed-certs-285966 --format={{.State.Status}}
	I0111 07:57:20.442100  314330 kic.go:430] container "embed-certs-285966" state is running.
	I0111 07:57:20.442555  314330 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-285966
	I0111 07:57:20.466366  314330 profile.go:143] Saving config to /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/embed-certs-285966/config.json ...
	I0111 07:57:20.466640  314330 machine.go:94] provisionDockerMachine start ...
	I0111 07:57:20.466723  314330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-285966
	I0111 07:57:20.489328  314330 main.go:144] libmachine: Using SSH client type: native
	I0111 07:57:20.489674  314330 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x9087e0] 0x90b480 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I0111 07:57:20.489694  314330 main.go:144] libmachine: About to run SSH command:
	hostname
	I0111 07:57:20.490658  314330 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:54022->127.0.0.1:33118: read: connection reset by peer
	I0111 07:57:23.636903  314330 main.go:144] libmachine: SSH cmd err, output: <nil>: embed-certs-285966
	
	I0111 07:57:23.636931  314330 ubuntu.go:182] provisioning hostname "embed-certs-285966"
	I0111 07:57:23.636984  314330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-285966
	I0111 07:57:23.654895  314330 main.go:144] libmachine: Using SSH client type: native
	I0111 07:57:23.655132  314330 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x9087e0] 0x90b480 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I0111 07:57:23.655155  314330 main.go:144] libmachine: About to run SSH command:
	sudo hostname embed-certs-285966 && echo "embed-certs-285966" | sudo tee /etc/hostname
	I0111 07:57:23.819458  314330 main.go:144] libmachine: SSH cmd err, output: <nil>: embed-certs-285966
	
	I0111 07:57:23.819549  314330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-285966
	I0111 07:57:23.840726  314330 main.go:144] libmachine: Using SSH client type: native
	I0111 07:57:23.840960  314330 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x9087e0] 0x90b480 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I0111 07:57:23.840978  314330 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-285966' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-285966/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-285966' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0111 07:57:23.998001  314330 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I0111 07:57:23.998047  314330 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22402-4689/.minikube CaCertPath:/home/jenkins/minikube-integration/22402-4689/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22402-4689/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22402-4689/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22402-4689/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22402-4689/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22402-4689/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22402-4689/.minikube}
	I0111 07:57:23.998068  314330 ubuntu.go:190] setting up certificates
	I0111 07:57:23.998081  314330 provision.go:84] configureAuth start
	I0111 07:57:23.998149  314330 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-285966
	I0111 07:57:24.021752  314330 provision.go:143] copyHostCerts
	I0111 07:57:24.021842  314330 exec_runner.go:144] found /home/jenkins/minikube-integration/22402-4689/.minikube/ca.pem, removing ...
	I0111 07:57:24.021864  314330 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22402-4689/.minikube/ca.pem
	I0111 07:57:24.021942  314330 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22402-4689/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22402-4689/.minikube/ca.pem (1082 bytes)
	I0111 07:57:24.022044  314330 exec_runner.go:144] found /home/jenkins/minikube-integration/22402-4689/.minikube/cert.pem, removing ...
	I0111 07:57:24.022053  314330 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22402-4689/.minikube/cert.pem
	I0111 07:57:24.022087  314330 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22402-4689/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22402-4689/.minikube/cert.pem (1123 bytes)
	I0111 07:57:24.022157  314330 exec_runner.go:144] found /home/jenkins/minikube-integration/22402-4689/.minikube/key.pem, removing ...
	I0111 07:57:24.022166  314330 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22402-4689/.minikube/key.pem
	I0111 07:57:24.022191  314330 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22402-4689/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22402-4689/.minikube/key.pem (1675 bytes)
	I0111 07:57:24.022242  314330 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22402-4689/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22402-4689/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22402-4689/.minikube/certs/ca-key.pem org=jenkins.embed-certs-285966 san=[127.0.0.1 192.168.85.2 embed-certs-285966 localhost minikube]
	I0111 07:57:24.142793  314330 provision.go:177] copyRemoteCerts
	I0111 07:57:24.142870  314330 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0111 07:57:24.142930  314330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-285966
	I0111 07:57:24.162159  314330 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/22402-4689/.minikube/machines/embed-certs-285966/id_rsa Username:docker}
	I0111 07:57:24.269671  314330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-4689/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0111 07:57:24.288570  314330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-4689/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0111 07:57:24.311980  314330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-4689/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0111 07:57:24.332761  314330 provision.go:87] duration metric: took 334.665445ms to configureAuth
	I0111 07:57:24.332793  314330 ubuntu.go:206] setting minikube options for container-runtime
	I0111 07:57:24.333036  314330 config.go:182] Loaded profile config "embed-certs-285966": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 07:57:24.333173  314330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-285966
	I0111 07:57:24.354098  314330 main.go:144] libmachine: Using SSH client type: native
	I0111 07:57:24.354437  314330 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x9087e0] 0x90b480 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I0111 07:57:24.354462  314330 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0111 07:57:22.826155  312970 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0111 07:57:22.826175  312970 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0111 07:57:22.826228  312970 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-875594
	I0111 07:57:22.852926  312970 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I0111 07:57:22.852955  312970 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0111 07:57:22.853021  312970 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-875594
	I0111 07:57:22.860032  312970 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/22402-4689/.minikube/machines/no-preload-875594/id_rsa Username:docker}
	I0111 07:57:22.860719  312970 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/22402-4689/.minikube/machines/no-preload-875594/id_rsa Username:docker}
	I0111 07:57:22.882594  312970 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/22402-4689/.minikube/machines/no-preload-875594/id_rsa Username:docker}
	I0111 07:57:22.967340  312970 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0111 07:57:22.984180  312970 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0111 07:57:22.985400  312970 node_ready.go:35] waiting up to 6m0s for node "no-preload-875594" to be "Ready" ...
	I0111 07:57:22.990076  312970 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0111 07:57:22.990096  312970 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0111 07:57:23.004579  312970 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0111 07:57:23.006666  312970 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0111 07:57:23.006734  312970 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0111 07:57:23.022435  312970 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0111 07:57:23.022507  312970 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0111 07:57:23.038045  312970 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0111 07:57:23.038068  312970 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0111 07:57:23.053394  312970 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0111 07:57:23.053419  312970 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0111 07:57:23.069975  312970 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0111 07:57:23.069997  312970 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0111 07:57:23.083658  312970 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0111 07:57:23.083683  312970 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0111 07:57:23.096387  312970 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0111 07:57:23.096414  312970 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0111 07:57:23.108732  312970 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0111 07:57:23.108753  312970 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0111 07:57:23.123407  312970 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0111 07:57:24.699494  312970 node_ready.go:49] node "no-preload-875594" is "Ready"
	I0111 07:57:24.699539  312970 node_ready.go:38] duration metric: took 1.714102765s for node "no-preload-875594" to be "Ready" ...
	I0111 07:57:24.699558  312970 api_server.go:52] waiting for apiserver process to appear ...
	I0111 07:57:24.699620  312970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0111 07:57:25.264148  312970 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.259529353s)
	I0111 07:57:25.264368  312970 api_server.go:72] duration metric: took 2.471300067s to wait for apiserver process to appear ...
	I0111 07:57:25.264395  312970 api_server.go:88] waiting for apiserver healthz status ...
	I0111 07:57:25.264415  312970 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0111 07:57:25.264746  312970 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.14086122s)
	I0111 07:57:25.265930  312970 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.281716424s)
	I0111 07:57:25.268057  312970 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-875594 addons enable metrics-server
	
	I0111 07:57:25.271876  312970 api_server.go:325] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0111 07:57:25.271910  312970 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0111 07:57:25.276300  312970 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I0111 07:57:20.745674  306327 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-585688" context rescaled to 1 replicas
	W0111 07:57:22.246322  306327 node_ready.go:57] node "default-k8s-diff-port-585688" has "Ready":"False" status (will retry)
	W0111 07:57:24.246663  306327 node_ready.go:57] node "default-k8s-diff-port-585688" has "Ready":"False" status (will retry)
	I0111 07:57:24.924868  314330 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0111 07:57:24.924898  314330 machine.go:97] duration metric: took 4.458237063s to provisionDockerMachine
	I0111 07:57:24.924915  314330 start.go:293] postStartSetup for "embed-certs-285966" (driver="docker")
	I0111 07:57:24.924929  314330 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0111 07:57:24.925011  314330 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0111 07:57:24.925072  314330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-285966
	I0111 07:57:24.948715  314330 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/22402-4689/.minikube/machines/embed-certs-285966/id_rsa Username:docker}
	I0111 07:57:25.056237  314330 ssh_runner.go:195] Run: cat /etc/os-release
	I0111 07:57:25.060877  314330 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0111 07:57:25.060914  314330 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I0111 07:57:25.060929  314330 filesync.go:126] Scanning /home/jenkins/minikube-integration/22402-4689/.minikube/addons for local assets ...
	I0111 07:57:25.061009  314330 filesync.go:126] Scanning /home/jenkins/minikube-integration/22402-4689/.minikube/files for local assets ...
	I0111 07:57:25.061153  314330 filesync.go:149] local asset: /home/jenkins/minikube-integration/22402-4689/.minikube/files/etc/ssl/certs/82382.pem -> 82382.pem in /etc/ssl/certs
	I0111 07:57:25.061345  314330 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0111 07:57:25.072432  314330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-4689/.minikube/files/etc/ssl/certs/82382.pem --> /etc/ssl/certs/82382.pem (1708 bytes)
	I0111 07:57:25.096447  314330 start.go:296] duration metric: took 171.515201ms for postStartSetup
	I0111 07:57:25.096535  314330 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0111 07:57:25.096605  314330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-285966
	I0111 07:57:25.123257  314330 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/22402-4689/.minikube/machines/embed-certs-285966/id_rsa Username:docker}
	I0111 07:57:25.232599  314330 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0111 07:57:25.237993  314330 fix.go:56] duration metric: took 5.195096268s for fixHost
	I0111 07:57:25.238022  314330 start.go:83] releasing machines lock for "embed-certs-285966", held for 5.195221226s
	I0111 07:57:25.238090  314330 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-285966
	I0111 07:57:25.274081  314330 ssh_runner.go:195] Run: cat /version.json
	I0111 07:57:25.274132  314330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-285966
	I0111 07:57:25.274141  314330 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0111 07:57:25.274221  314330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-285966
	I0111 07:57:25.294463  314330 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/22402-4689/.minikube/machines/embed-certs-285966/id_rsa Username:docker}
	I0111 07:57:25.294814  314330 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/22402-4689/.minikube/machines/embed-certs-285966/id_rsa Username:docker}
	I0111 07:57:25.461990  314330 ssh_runner.go:195] Run: systemctl --version
	I0111 07:57:25.470450  314330 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0111 07:57:25.520404  314330 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0111 07:57:25.526010  314330 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0111 07:57:25.526102  314330 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0111 07:57:25.535554  314330 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0111 07:57:25.535582  314330 start.go:496] detecting cgroup driver to use...
	I0111 07:57:25.535616  314330 detect.go:178] detected "systemd" cgroup driver on host os
	I0111 07:57:25.535689  314330 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0111 07:57:25.550138  314330 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0111 07:57:25.563298  314330 docker.go:218] disabling cri-docker service (if available) ...
	I0111 07:57:25.563361  314330 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0111 07:57:25.581052  314330 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0111 07:57:25.601978  314330 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0111 07:57:25.702183  314330 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0111 07:57:25.813319  314330 docker.go:234] disabling docker service ...
	I0111 07:57:25.813386  314330 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0111 07:57:25.832279  314330 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0111 07:57:25.849526  314330 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0111 07:57:25.952537  314330 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0111 07:57:26.037304  314330 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0111 07:57:26.049968  314330 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0111 07:57:26.064692  314330 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0111 07:57:26.064759  314330 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 07:57:26.074969  314330 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I0111 07:57:26.075050  314330 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 07:57:26.083836  314330 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 07:57:26.092639  314330 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 07:57:26.101118  314330 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0111 07:57:26.108739  314330 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 07:57:26.117366  314330 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 07:57:26.125841  314330 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 07:57:26.134328  314330 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0111 07:57:26.143852  314330 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0111 07:57:26.152918  314330 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0111 07:57:26.243204  314330 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0111 07:57:26.394277  314330 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I0111 07:57:26.394335  314330 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0111 07:57:26.398364  314330 start.go:574] Will wait 60s for crictl version
	I0111 07:57:26.398412  314330 ssh_runner.go:195] Run: which crictl
	I0111 07:57:26.401916  314330 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I0111 07:57:26.426989  314330 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.35.0
	RuntimeApiVersion:  v1
	I0111 07:57:26.427094  314330 ssh_runner.go:195] Run: crio --version
	I0111 07:57:26.455028  314330 ssh_runner.go:195] Run: crio --version
	I0111 07:57:26.484547  314330 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.35.0 ...
	W0111 07:57:23.068445  310739 pod_ready.go:104] pod "coredns-5dd5756b68-797jn" is not "Ready", error: <nil>
	W0111 07:57:25.070727  310739 pod_ready.go:104] pod "coredns-5dd5756b68-797jn" is not "Ready", error: <nil>
	I0111 07:57:26.485626  314330 cli_runner.go:164] Run: docker network inspect embed-certs-285966 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0111 07:57:26.503285  314330 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0111 07:57:26.507314  314330 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0111 07:57:26.518027  314330 kubeadm.go:884] updating cluster {Name:embed-certs-285966 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:embed-certs-285966 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I0111 07:57:26.518170  314330 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0111 07:57:26.518239  314330 ssh_runner.go:195] Run: sudo crictl images --output json
	I0111 07:57:26.551567  314330 crio.go:561] all images are preloaded for cri-o runtime.
	I0111 07:57:26.551593  314330 crio.go:433] Images already preloaded, skipping extraction
	I0111 07:57:26.551634  314330 ssh_runner.go:195] Run: sudo crictl images --output json
	I0111 07:57:26.578238  314330 crio.go:561] all images are preloaded for cri-o runtime.
	I0111 07:57:26.578266  314330 cache_images.go:86] Images are preloaded, skipping loading
	I0111 07:57:26.578276  314330 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0 crio true true} ...
	I0111 07:57:26.578385  314330 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-285966 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:embed-certs-285966 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0111 07:57:26.578471  314330 ssh_runner.go:195] Run: crio config
	I0111 07:57:26.626049  314330 cni.go:84] Creating CNI manager for ""
	I0111 07:57:26.626089  314330 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0111 07:57:26.626107  314330 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I0111 07:57:26.626134  314330 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-285966 NodeName:embed-certs-285966 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0111 07:57:26.626293  314330 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-285966"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0111 07:57:26.626363  314330 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I0111 07:57:26.634572  314330 binaries.go:51] Found k8s binaries, skipping transfer
	I0111 07:57:26.634648  314330 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0111 07:57:26.641966  314330 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I0111 07:57:26.654279  314330 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0111 07:57:26.666505  314330 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I0111 07:57:26.679200  314330 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0111 07:57:26.683034  314330 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0111 07:57:26.693425  314330 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0111 07:57:26.791200  314330 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0111 07:57:26.820613  314330 certs.go:69] Setting up /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/embed-certs-285966 for IP: 192.168.85.2
	I0111 07:57:26.820635  314330 certs.go:195] generating shared ca certs ...
	I0111 07:57:26.820655  314330 certs.go:227] acquiring lock for ca certs: {Name:mk9f7a57f61b85f2c0f52e44b6a926769e368f21 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 07:57:26.820861  314330 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22402-4689/.minikube/ca.key
	I0111 07:57:26.820938  314330 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22402-4689/.minikube/proxy-client-ca.key
	I0111 07:57:26.820954  314330 certs.go:257] generating profile certs ...
	I0111 07:57:26.821058  314330 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/embed-certs-285966/client.key
	I0111 07:57:26.821139  314330 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/embed-certs-285966/apiserver.key.9d1949d7
	I0111 07:57:26.821202  314330 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/embed-certs-285966/proxy-client.key
	I0111 07:57:26.821340  314330 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-4689/.minikube/certs/8238.pem (1338 bytes)
	W0111 07:57:26.821388  314330 certs.go:480] ignoring /home/jenkins/minikube-integration/22402-4689/.minikube/certs/8238_empty.pem, impossibly tiny 0 bytes
	I0111 07:57:26.821403  314330 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-4689/.minikube/certs/ca-key.pem (1679 bytes)
	I0111 07:57:26.821444  314330 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-4689/.minikube/certs/ca.pem (1082 bytes)
	I0111 07:57:26.821482  314330 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-4689/.minikube/certs/cert.pem (1123 bytes)
	I0111 07:57:26.821518  314330 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-4689/.minikube/certs/key.pem (1675 bytes)
	I0111 07:57:26.821579  314330 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-4689/.minikube/files/etc/ssl/certs/82382.pem (1708 bytes)
	I0111 07:57:26.822248  314330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-4689/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0111 07:57:26.843545  314330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-4689/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0111 07:57:26.864521  314330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-4689/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0111 07:57:26.884034  314330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-4689/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0111 07:57:26.908082  314330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/embed-certs-285966/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0111 07:57:26.926817  314330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/embed-certs-285966/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0111 07:57:26.944236  314330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/embed-certs-285966/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0111 07:57:26.963157  314330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/embed-certs-285966/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0111 07:57:26.980380  314330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-4689/.minikube/files/etc/ssl/certs/82382.pem --> /usr/share/ca-certificates/82382.pem (1708 bytes)
	I0111 07:57:26.998248  314330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-4689/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0111 07:57:27.017077  314330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-4689/.minikube/certs/8238.pem --> /usr/share/ca-certificates/8238.pem (1338 bytes)
	I0111 07:57:27.036074  314330 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I0111 07:57:27.048552  314330 ssh_runner.go:195] Run: openssl version
	I0111 07:57:27.055458  314330 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8238.pem
	I0111 07:57:27.063655  314330 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8238.pem /etc/ssl/certs/8238.pem
	I0111 07:57:27.072442  314330 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8238.pem
	I0111 07:57:27.076284  314330 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 11 07:27 /usr/share/ca-certificates/8238.pem
	I0111 07:57:27.076334  314330 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8238.pem
	I0111 07:57:27.112746  314330 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I0111 07:57:27.120695  314330 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/82382.pem
	I0111 07:57:27.128931  314330 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/82382.pem /etc/ssl/certs/82382.pem
	I0111 07:57:27.136417  314330 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/82382.pem
	I0111 07:57:27.140141  314330 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 11 07:27 /usr/share/ca-certificates/82382.pem
	I0111 07:57:27.140217  314330 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/82382.pem
	I0111 07:57:27.178623  314330 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I0111 07:57:27.186962  314330 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I0111 07:57:27.194848  314330 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I0111 07:57:27.202736  314330 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0111 07:57:27.207223  314330 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 11 07:23 /usr/share/ca-certificates/minikubeCA.pem
	I0111 07:57:27.207288  314330 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0111 07:57:27.242426  314330 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I0111 07:57:27.251361  314330 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0111 07:57:27.256002  314330 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0111 07:57:27.291573  314330 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0111 07:57:27.329575  314330 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0111 07:57:27.375985  314330 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0111 07:57:27.418881  314330 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0111 07:57:27.472072  314330 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0111 07:57:27.517915  314330 kubeadm.go:401] StartCluster: {Name:embed-certs-285966 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:embed-certs-285966 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0111 07:57:27.518024  314330 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0111 07:57:27.518098  314330 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0111 07:57:27.553171  314330 cri.go:96] found id: "549b7752722376433db3102bf0433676cbe020d474e238af58fe8f3ce20e7cad"
	I0111 07:57:27.553197  314330 cri.go:96] found id: "922cd28382c3c7abe22799ee282d7e8cdbde7bdf06d9c7ad7752450641f086fe"
	I0111 07:57:27.553204  314330 cri.go:96] found id: "1c93a7283300525c6fa1c9642d2a153684e61da58f29768b0be4390bc81386eb"
	I0111 07:57:27.553210  314330 cri.go:96] found id: "99ff4271fc175e5592daa8e5b70c899fc351f028abff121118ae6f6deb18a8ef"
	I0111 07:57:27.553215  314330 cri.go:96] found id: ""
	I0111 07:57:27.553264  314330 ssh_runner.go:195] Run: sudo runc list -f json
	W0111 07:57:27.567754  314330 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-11T07:57:27Z" level=error msg="open /run/runc: no such file or directory"
	I0111 07:57:27.567857  314330 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0111 07:57:27.576846  314330 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I0111 07:57:27.576866  314330 kubeadm.go:598] restartPrimaryControlPlane start ...
	I0111 07:57:27.576922  314330 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0111 07:57:27.585431  314330 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0111 07:57:27.586382  314330 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-285966" does not appear in /home/jenkins/minikube-integration/22402-4689/kubeconfig
	I0111 07:57:27.587104  314330 kubeconfig.go:62] /home/jenkins/minikube-integration/22402-4689/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-285966" cluster setting kubeconfig missing "embed-certs-285966" context setting]
	I0111 07:57:27.588174  314330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-4689/kubeconfig: {Name:mkf02a7b755a7b9c0efbafb355649086e2a25e84 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 07:57:27.590118  314330 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0111 07:57:27.600829  314330 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I0111 07:57:27.600859  314330 kubeadm.go:602] duration metric: took 23.987507ms to restartPrimaryControlPlane
	I0111 07:57:27.600870  314330 kubeadm.go:403] duration metric: took 82.963511ms to StartCluster
	I0111 07:57:27.600887  314330 settings.go:142] acquiring lock: {Name:mk091111a06dcfa678353e028948ce400417c137 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 07:57:27.600958  314330 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22402-4689/kubeconfig
	I0111 07:57:27.603070  314330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-4689/kubeconfig: {Name:mkf02a7b755a7b9c0efbafb355649086e2a25e84 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 07:57:27.603343  314330 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0111 07:57:27.603466  314330 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0111 07:57:27.603565  314330 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-285966"
	I0111 07:57:27.603571  314330 config.go:182] Loaded profile config "embed-certs-285966": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 07:57:27.603585  314330 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-285966"
	W0111 07:57:27.603593  314330 addons.go:248] addon storage-provisioner should already be in state true
	I0111 07:57:27.603591  314330 addons.go:70] Setting dashboard=true in profile "embed-certs-285966"
	I0111 07:57:27.603620  314330 host.go:66] Checking if "embed-certs-285966" exists ...
	I0111 07:57:27.603628  314330 addons.go:239] Setting addon dashboard=true in "embed-certs-285966"
	I0111 07:57:27.603621  314330 addons.go:70] Setting default-storageclass=true in profile "embed-certs-285966"
	I0111 07:57:27.603660  314330 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-285966"
	W0111 07:57:27.603642  314330 addons.go:248] addon dashboard should already be in state true
	I0111 07:57:27.603778  314330 host.go:66] Checking if "embed-certs-285966" exists ...
	I0111 07:57:27.604010  314330 cli_runner.go:164] Run: docker container inspect embed-certs-285966 --format={{.State.Status}}
	I0111 07:57:27.604100  314330 cli_runner.go:164] Run: docker container inspect embed-certs-285966 --format={{.State.Status}}
	I0111 07:57:27.604747  314330 cli_runner.go:164] Run: docker container inspect embed-certs-285966 --format={{.State.Status}}
	I0111 07:57:27.608275  314330 out.go:179] * Verifying Kubernetes components...
	I0111 07:57:27.610984  314330 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0111 07:57:27.631486  314330 addons.go:239] Setting addon default-storageclass=true in "embed-certs-285966"
	W0111 07:57:27.631540  314330 addons.go:248] addon default-storageclass should already be in state true
	I0111 07:57:27.631575  314330 host.go:66] Checking if "embed-certs-285966" exists ...
	I0111 07:57:27.632025  314330 cli_runner.go:164] Run: docker container inspect embed-certs-285966 --format={{.State.Status}}
	I0111 07:57:27.632185  314330 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0111 07:57:27.632262  314330 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0111 07:57:27.633478  314330 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0111 07:57:27.633515  314330 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0111 07:57:27.633556  314330 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I0111 07:57:27.633573  314330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-285966
	I0111 07:57:27.634914  314330 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0111 07:57:27.634932  314330 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0111 07:57:27.634985  314330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-285966
	I0111 07:57:27.670429  314330 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/22402-4689/.minikube/machines/embed-certs-285966/id_rsa Username:docker}
	I0111 07:57:27.670829  314330 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I0111 07:57:27.670867  314330 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0111 07:57:27.670931  314330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-285966
	I0111 07:57:27.673748  314330 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/22402-4689/.minikube/machines/embed-certs-285966/id_rsa Username:docker}
	I0111 07:57:27.694545  314330 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/22402-4689/.minikube/machines/embed-certs-285966/id_rsa Username:docker}
	I0111 07:57:27.764008  314330 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0111 07:57:27.778059  314330 node_ready.go:35] waiting up to 6m0s for node "embed-certs-285966" to be "Ready" ...
	I0111 07:57:27.789752  314330 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0111 07:57:27.789774  314330 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0111 07:57:27.795138  314330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0111 07:57:27.804904  314330 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0111 07:57:27.804927  314330 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0111 07:57:27.825335  314330 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0111 07:57:27.825364  314330 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0111 07:57:27.828680  314330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0111 07:57:27.864921  314330 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0111 07:57:27.864948  314330 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0111 07:57:27.889073  314330 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0111 07:57:27.889104  314330 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0111 07:57:27.906250  314330 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0111 07:57:27.906281  314330 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0111 07:57:27.928357  314330 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0111 07:57:27.928386  314330 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0111 07:57:27.947212  314330 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0111 07:57:27.947237  314330 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0111 07:57:27.962243  314330 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0111 07:57:27.962272  314330 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0111 07:57:27.976394  314330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0111 07:57:28.878411  314330 node_ready.go:49] node "embed-certs-285966" is "Ready"
	I0111 07:57:28.878441  314330 node_ready.go:38] duration metric: took 1.100342547s for node "embed-certs-285966" to be "Ready" ...
	I0111 07:57:28.878463  314330 api_server.go:52] waiting for apiserver process to appear ...
	I0111 07:57:28.878524  314330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0111 07:57:29.637953  314330 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.842776005s)
	I0111 07:57:29.638486  314330 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.809771921s)
	I0111 07:57:29.638609  314330 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.662154032s)
	I0111 07:57:29.638835  314330 api_server.go:72] duration metric: took 2.035457562s to wait for apiserver process to appear ...
	I0111 07:57:29.638848  314330 api_server.go:88] waiting for apiserver healthz status ...
	I0111 07:57:29.638870  314330 api_server.go:299] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0111 07:57:29.641094  314330 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-285966 addons enable metrics-server
	
	I0111 07:57:29.647699  314330 api_server.go:325] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0111 07:57:29.647726  314330 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0111 07:57:29.657447  314330 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I0111 07:57:29.658953  314330 addons.go:530] duration metric: took 2.055490635s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I0111 07:57:25.277446  312970 addons.go:530] duration metric: took 2.484356014s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I0111 07:57:25.764533  312970 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0111 07:57:25.769724  312970 api_server.go:325] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0111 07:57:25.769755  312970 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0111 07:57:26.264975  312970 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0111 07:57:26.269366  312970 api_server.go:325] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0111 07:57:26.270470  312970 api_server.go:141] control plane version: v1.35.0
	I0111 07:57:26.270512  312970 api_server.go:131] duration metric: took 1.006109417s to wait for apiserver health ...
	I0111 07:57:26.270527  312970 system_pods.go:43] waiting for kube-system pods to appear ...
	I0111 07:57:26.274130  312970 system_pods.go:59] 8 kube-system pods found
	I0111 07:57:26.274174  312970 system_pods.go:61] "coredns-7d764666f9-gw9b6" [fab33c8a-a60a-4148-9d33-607b857a3867] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0111 07:57:26.274202  312970 system_pods.go:61] "etcd-no-preload-875594" [b5c2e76d-30c8-4b1f-9d33-bf17408c89d0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0111 07:57:26.274212  312970 system_pods.go:61] "kindnet-p7w57" [f7547d3e-e98d-42af-a0cc-557386ef7efa] Running
	I0111 07:57:26.274219  312970 system_pods.go:61] "kube-apiserver-no-preload-875594" [231873ff-977b-4cce-9b1f-5b54c160ed65] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0111 07:57:26.274224  312970 system_pods.go:61] "kube-controller-manager-no-preload-875594" [b9c4c8b6-049a-4281-9e88-8e216de6745e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0111 07:57:26.274229  312970 system_pods.go:61] "kube-proxy-49b4x" [c9fb5a8d-61b1-4674-aafc-a24c6c468eee] Running
	I0111 07:57:26.274234  312970 system_pods.go:61] "kube-scheduler-no-preload-875594" [6e6d1393-a91c-4c18-b5aa-47716fc63a9c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0111 07:57:26.274238  312970 system_pods.go:61] "storage-provisioner" [5e5ad34f-7c3c-4f85-8280-d79b3586001e] Running
	I0111 07:57:26.274247  312970 system_pods.go:74] duration metric: took 3.710815ms to wait for pod list to return data ...
	I0111 07:57:26.274255  312970 default_sa.go:34] waiting for default service account to be created ...
	I0111 07:57:26.276559  312970 default_sa.go:45] found service account: "default"
	I0111 07:57:26.276580  312970 default_sa.go:55] duration metric: took 2.317108ms for default service account to be created ...
	I0111 07:57:26.276589  312970 system_pods.go:116] waiting for k8s-apps to be running ...
	I0111 07:57:26.279046  312970 system_pods.go:86] 8 kube-system pods found
	I0111 07:57:26.279071  312970 system_pods.go:89] "coredns-7d764666f9-gw9b6" [fab33c8a-a60a-4148-9d33-607b857a3867] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0111 07:57:26.279079  312970 system_pods.go:89] "etcd-no-preload-875594" [b5c2e76d-30c8-4b1f-9d33-bf17408c89d0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0111 07:57:26.279084  312970 system_pods.go:89] "kindnet-p7w57" [f7547d3e-e98d-42af-a0cc-557386ef7efa] Running
	I0111 07:57:26.279090  312970 system_pods.go:89] "kube-apiserver-no-preload-875594" [231873ff-977b-4cce-9b1f-5b54c160ed65] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0111 07:57:26.279104  312970 system_pods.go:89] "kube-controller-manager-no-preload-875594" [b9c4c8b6-049a-4281-9e88-8e216de6745e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0111 07:57:26.279110  312970 system_pods.go:89] "kube-proxy-49b4x" [c9fb5a8d-61b1-4674-aafc-a24c6c468eee] Running
	I0111 07:57:26.279127  312970 system_pods.go:89] "kube-scheduler-no-preload-875594" [6e6d1393-a91c-4c18-b5aa-47716fc63a9c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0111 07:57:26.279135  312970 system_pods.go:89] "storage-provisioner" [5e5ad34f-7c3c-4f85-8280-d79b3586001e] Running
	I0111 07:57:26.279142  312970 system_pods.go:126] duration metric: took 2.548503ms to wait for k8s-apps to be running ...
	I0111 07:57:26.279148  312970 system_svc.go:44] waiting for kubelet service to be running ....
	I0111 07:57:26.279196  312970 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0111 07:57:26.294311  312970 system_svc.go:56] duration metric: took 15.153038ms WaitForService to wait for kubelet
	I0111 07:57:26.294340  312970 kubeadm.go:587] duration metric: took 3.501274051s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0111 07:57:26.294361  312970 node_conditions.go:102] verifying NodePressure condition ...
	I0111 07:57:26.297178  312970 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0111 07:57:26.297202  312970 node_conditions.go:123] node cpu capacity is 8
	I0111 07:57:26.297238  312970 node_conditions.go:105] duration metric: took 2.871117ms to run NodePressure ...
	I0111 07:57:26.297251  312970 start.go:242] waiting for startup goroutines ...
	I0111 07:57:26.297260  312970 start.go:247] waiting for cluster config update ...
	I0111 07:57:26.297269  312970 start.go:256] writing updated cluster config ...
	I0111 07:57:26.297517  312970 ssh_runner.go:195] Run: rm -f paused
	I0111 07:57:26.301885  312970 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0111 07:57:26.305631  312970 pod_ready.go:83] waiting for pod "coredns-7d764666f9-gw9b6" in "kube-system" namespace to be "Ready" or be gone ...
	W0111 07:57:28.311509  312970 pod_ready.go:104] pod "coredns-7d764666f9-gw9b6" is not "Ready", error: <nil>
	W0111 07:57:26.746879  306327 node_ready.go:57] node "default-k8s-diff-port-585688" has "Ready":"False" status (will retry)
	W0111 07:57:29.247400  306327 node_ready.go:57] node "default-k8s-diff-port-585688" has "Ready":"False" status (will retry)
	W0111 07:57:27.569924  310739 pod_ready.go:104] pod "coredns-5dd5756b68-797jn" is not "Ready", error: <nil>
	W0111 07:57:29.570899  310739 pod_ready.go:104] pod "coredns-5dd5756b68-797jn" is not "Ready", error: <nil>
	W0111 07:57:32.071259  310739 pod_ready.go:104] pod "coredns-5dd5756b68-797jn" is not "Ready", error: <nil>
	I0111 07:57:30.139217  314330 api_server.go:299] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0111 07:57:30.144842  314330 api_server.go:325] https://192.168.85.2:8443/healthz returned 200:
	ok
	I0111 07:57:30.146006  314330 api_server.go:141] control plane version: v1.35.0
	I0111 07:57:30.146036  314330 api_server.go:131] duration metric: took 507.181074ms to wait for apiserver health ...
	I0111 07:57:30.146047  314330 system_pods.go:43] waiting for kube-system pods to appear ...
	I0111 07:57:30.149776  314330 system_pods.go:59] 8 kube-system pods found
	I0111 07:57:30.149858  314330 system_pods.go:61] "coredns-7d764666f9-2ffx6" [2688eeb7-8bc5-4f15-b509-5cbef9221315] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0111 07:57:30.149873  314330 system_pods.go:61] "etcd-embed-certs-285966" [9899ee31-98db-4fbb-b113-9d419e91352b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0111 07:57:30.149881  314330 system_pods.go:61] "kindnet-jqj7z" [501d0d18-f10c-475b-a6e7-fe2c701a98f4] Running
	I0111 07:57:30.149896  314330 system_pods.go:61] "kube-apiserver-embed-certs-285966" [daa95ccd-dd84-4c41-aff3-1f25fee32f3c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0111 07:57:30.149903  314330 system_pods.go:61] "kube-controller-manager-embed-certs-285966" [cb996cff-c3b9-4656-8d67-822e68603ea1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0111 07:57:30.149909  314330 system_pods.go:61] "kube-proxy-2slrz" [63080ebb-cfaf-431d-a8db-d5c25bf7e852] Running
	I0111 07:57:30.149917  314330 system_pods.go:61] "kube-scheduler-embed-certs-285966" [cd55227a-af2c-4760-9c3f-414c78c535e9] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0111 07:57:30.149922  314330 system_pods.go:61] "storage-provisioner" [8203c244-cf17-45c3-829c-b15dd46689b6] Running
	I0111 07:57:30.149931  314330 system_pods.go:74] duration metric: took 3.875937ms to wait for pod list to return data ...
	I0111 07:57:30.149939  314330 default_sa.go:34] waiting for default service account to be created ...
	I0111 07:57:30.152639  314330 default_sa.go:45] found service account: "default"
	I0111 07:57:30.152659  314330 default_sa.go:55] duration metric: took 2.713769ms for default service account to be created ...
	I0111 07:57:30.152669  314330 system_pods.go:116] waiting for k8s-apps to be running ...
	I0111 07:57:30.155618  314330 system_pods.go:86] 8 kube-system pods found
	I0111 07:57:30.155653  314330 system_pods.go:89] "coredns-7d764666f9-2ffx6" [2688eeb7-8bc5-4f15-b509-5cbef9221315] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0111 07:57:30.155666  314330 system_pods.go:89] "etcd-embed-certs-285966" [9899ee31-98db-4fbb-b113-9d419e91352b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0111 07:57:30.155684  314330 system_pods.go:89] "kindnet-jqj7z" [501d0d18-f10c-475b-a6e7-fe2c701a98f4] Running
	I0111 07:57:30.155695  314330 system_pods.go:89] "kube-apiserver-embed-certs-285966" [daa95ccd-dd84-4c41-aff3-1f25fee32f3c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0111 07:57:30.155707  314330 system_pods.go:89] "kube-controller-manager-embed-certs-285966" [cb996cff-c3b9-4656-8d67-822e68603ea1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0111 07:57:30.155719  314330 system_pods.go:89] "kube-proxy-2slrz" [63080ebb-cfaf-431d-a8db-d5c25bf7e852] Running
	I0111 07:57:30.155733  314330 system_pods.go:89] "kube-scheduler-embed-certs-285966" [cd55227a-af2c-4760-9c3f-414c78c535e9] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0111 07:57:30.155741  314330 system_pods.go:89] "storage-provisioner" [8203c244-cf17-45c3-829c-b15dd46689b6] Running
	I0111 07:57:30.155752  314330 system_pods.go:126] duration metric: took 3.076321ms to wait for k8s-apps to be running ...
	I0111 07:57:30.155766  314330 system_svc.go:44] waiting for kubelet service to be running ....
	I0111 07:57:30.155833  314330 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0111 07:57:30.172587  314330 system_svc.go:56] duration metric: took 16.812829ms WaitForService to wait for kubelet
	I0111 07:57:30.172623  314330 kubeadm.go:587] duration metric: took 2.569245946s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0111 07:57:30.172647  314330 node_conditions.go:102] verifying NodePressure condition ...
	I0111 07:57:30.176022  314330 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0111 07:57:30.176055  314330 node_conditions.go:123] node cpu capacity is 8
	I0111 07:57:30.176076  314330 node_conditions.go:105] duration metric: took 3.422854ms to run NodePressure ...
	I0111 07:57:30.176092  314330 start.go:242] waiting for startup goroutines ...
	I0111 07:57:30.176129  314330 start.go:247] waiting for cluster config update ...
	I0111 07:57:30.176154  314330 start.go:256] writing updated cluster config ...
	I0111 07:57:30.176555  314330 ssh_runner.go:195] Run: rm -f paused
	I0111 07:57:30.181904  314330 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0111 07:57:30.186613  314330 pod_ready.go:83] waiting for pod "coredns-7d764666f9-2ffx6" in "kube-system" namespace to be "Ready" or be gone ...
	W0111 07:57:32.196472  314330 pod_ready.go:104] pod "coredns-7d764666f9-2ffx6" is not "Ready", error: <nil>
	W0111 07:57:34.196993  314330 pod_ready.go:104] pod "coredns-7d764666f9-2ffx6" is not "Ready", error: <nil>
	W0111 07:57:30.312692  312970 pod_ready.go:104] pod "coredns-7d764666f9-gw9b6" is not "Ready", error: <nil>
	W0111 07:57:32.315069  312970 pod_ready.go:104] pod "coredns-7d764666f9-gw9b6" is not "Ready", error: <nil>
	W0111 07:57:34.315541  312970 pod_ready.go:104] pod "coredns-7d764666f9-gw9b6" is not "Ready", error: <nil>
	W0111 07:57:31.745986  306327 node_ready.go:57] node "default-k8s-diff-port-585688" has "Ready":"False" status (will retry)
	I0111 07:57:32.746984  306327 node_ready.go:49] node "default-k8s-diff-port-585688" is "Ready"
	I0111 07:57:32.747030  306327 node_ready.go:38] duration metric: took 12.504413149s for node "default-k8s-diff-port-585688" to be "Ready" ...
	I0111 07:57:32.747056  306327 api_server.go:52] waiting for apiserver process to appear ...
	I0111 07:57:32.747114  306327 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0111 07:57:32.767099  306327 api_server.go:72] duration metric: took 12.871560113s to wait for apiserver process to appear ...
	I0111 07:57:32.767129  306327 api_server.go:88] waiting for apiserver healthz status ...
	I0111 07:57:32.767153  306327 api_server.go:299] Checking apiserver healthz at https://192.168.94.2:8444/healthz ...
	I0111 07:57:32.774911  306327 api_server.go:325] https://192.168.94.2:8444/healthz returned 200:
	ok
	I0111 07:57:32.776707  306327 api_server.go:141] control plane version: v1.35.0
	I0111 07:57:32.776826  306327 api_server.go:131] duration metric: took 9.598172ms to wait for apiserver health ...
	I0111 07:57:32.776844  306327 system_pods.go:43] waiting for kube-system pods to appear ...
	I0111 07:57:32.782819  306327 system_pods.go:59] 8 kube-system pods found
	I0111 07:57:32.782862  306327 system_pods.go:61] "coredns-7d764666f9-9n4hp" [9e11784a-1ace-44c0-8f9e-656ed390ed43] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0111 07:57:32.782871  306327 system_pods.go:61] "etcd-default-k8s-diff-port-585688" [8a1723a7-4506-4cbe-b1b5-c6f9df32979c] Running
	I0111 07:57:32.782879  306327 system_pods.go:61] "kindnet-lz5hh" [035a0293-8ca4-4f7c-8cc7-33f5f35eee60] Running
	I0111 07:57:32.782895  306327 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-585688" [2dd2d325-0f00-43c4-b8c3-40a121ad8236] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0111 07:57:32.782901  306327 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-585688" [f8d9761a-cb04-4f60-a85c-4e5d711937e5] Running
	I0111 07:57:32.782907  306327 system_pods.go:61] "kube-proxy-5z6rc" [4f0dd5f1-a24c-4f95-8434-9ff6e40b85e6] Running
	I0111 07:57:32.782912  306327 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-585688" [c22fa171-44f1-4321-b77a-43088630dfc2] Running
	I0111 07:57:32.782919  306327 system_pods.go:61] "storage-provisioner" [7f00c923-02d5-4873-968e-207c6eaa9567] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0111 07:57:32.782927  306327 system_pods.go:74] duration metric: took 6.074855ms to wait for pod list to return data ...
	I0111 07:57:32.782948  306327 default_sa.go:34] waiting for default service account to be created ...
	I0111 07:57:32.786131  306327 default_sa.go:45] found service account: "default"
	I0111 07:57:32.786156  306327 default_sa.go:55] duration metric: took 3.201145ms for default service account to be created ...
	I0111 07:57:32.786166  306327 system_pods.go:116] waiting for k8s-apps to be running ...
	I0111 07:57:32.795461  306327 system_pods.go:86] 8 kube-system pods found
	I0111 07:57:32.795508  306327 system_pods.go:89] "coredns-7d764666f9-9n4hp" [9e11784a-1ace-44c0-8f9e-656ed390ed43] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0111 07:57:32.795518  306327 system_pods.go:89] "etcd-default-k8s-diff-port-585688" [8a1723a7-4506-4cbe-b1b5-c6f9df32979c] Running
	I0111 07:57:32.795528  306327 system_pods.go:89] "kindnet-lz5hh" [035a0293-8ca4-4f7c-8cc7-33f5f35eee60] Running
	I0111 07:57:32.795551  306327 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-585688" [2dd2d325-0f00-43c4-b8c3-40a121ad8236] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0111 07:57:32.795559  306327 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-585688" [f8d9761a-cb04-4f60-a85c-4e5d711937e5] Running
	I0111 07:57:32.795566  306327 system_pods.go:89] "kube-proxy-5z6rc" [4f0dd5f1-a24c-4f95-8434-9ff6e40b85e6] Running
	I0111 07:57:32.795577  306327 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-585688" [c22fa171-44f1-4321-b77a-43088630dfc2] Running
	I0111 07:57:32.795599  306327 system_pods.go:89] "storage-provisioner" [7f00c923-02d5-4873-968e-207c6eaa9567] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0111 07:57:32.795655  306327 retry.go:84] will retry after 200ms: missing components: kube-dns
	I0111 07:57:33.022417  306327 system_pods.go:86] 8 kube-system pods found
	I0111 07:57:33.022463  306327 system_pods.go:89] "coredns-7d764666f9-9n4hp" [9e11784a-1ace-44c0-8f9e-656ed390ed43] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0111 07:57:33.022472  306327 system_pods.go:89] "etcd-default-k8s-diff-port-585688" [8a1723a7-4506-4cbe-b1b5-c6f9df32979c] Running
	I0111 07:57:33.022482  306327 system_pods.go:89] "kindnet-lz5hh" [035a0293-8ca4-4f7c-8cc7-33f5f35eee60] Running
	I0111 07:57:33.022492  306327 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-585688" [2dd2d325-0f00-43c4-b8c3-40a121ad8236] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0111 07:57:33.022504  306327 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-585688" [f8d9761a-cb04-4f60-a85c-4e5d711937e5] Running
	I0111 07:57:33.022510  306327 system_pods.go:89] "kube-proxy-5z6rc" [4f0dd5f1-a24c-4f95-8434-9ff6e40b85e6] Running
	I0111 07:57:33.022517  306327 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-585688" [c22fa171-44f1-4321-b77a-43088630dfc2] Running
	I0111 07:57:33.022522  306327 system_pods.go:89] "storage-provisioner" [7f00c923-02d5-4873-968e-207c6eaa9567] Running
	I0111 07:57:33.289138  306327 system_pods.go:86] 8 kube-system pods found
	I0111 07:57:33.289185  306327 system_pods.go:89] "coredns-7d764666f9-9n4hp" [9e11784a-1ace-44c0-8f9e-656ed390ed43] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0111 07:57:33.289197  306327 system_pods.go:89] "etcd-default-k8s-diff-port-585688" [8a1723a7-4506-4cbe-b1b5-c6f9df32979c] Running
	I0111 07:57:33.289205  306327 system_pods.go:89] "kindnet-lz5hh" [035a0293-8ca4-4f7c-8cc7-33f5f35eee60] Running
	I0111 07:57:33.289216  306327 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-585688" [2dd2d325-0f00-43c4-b8c3-40a121ad8236] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0111 07:57:33.289224  306327 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-585688" [f8d9761a-cb04-4f60-a85c-4e5d711937e5] Running
	I0111 07:57:33.289230  306327 system_pods.go:89] "kube-proxy-5z6rc" [4f0dd5f1-a24c-4f95-8434-9ff6e40b85e6] Running
	I0111 07:57:33.289236  306327 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-585688" [c22fa171-44f1-4321-b77a-43088630dfc2] Running
	I0111 07:57:33.289240  306327 system_pods.go:89] "storage-provisioner" [7f00c923-02d5-4873-968e-207c6eaa9567] Running
	I0111 07:57:33.692956  306327 system_pods.go:86] 8 kube-system pods found
	I0111 07:57:33.692994  306327 system_pods.go:89] "coredns-7d764666f9-9n4hp" [9e11784a-1ace-44c0-8f9e-656ed390ed43] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0111 07:57:33.693005  306327 system_pods.go:89] "etcd-default-k8s-diff-port-585688" [8a1723a7-4506-4cbe-b1b5-c6f9df32979c] Running
	I0111 07:57:33.693014  306327 system_pods.go:89] "kindnet-lz5hh" [035a0293-8ca4-4f7c-8cc7-33f5f35eee60] Running
	I0111 07:57:33.693023  306327 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-585688" [2dd2d325-0f00-43c4-b8c3-40a121ad8236] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0111 07:57:33.693029  306327 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-585688" [f8d9761a-cb04-4f60-a85c-4e5d711937e5] Running
	I0111 07:57:33.693035  306327 system_pods.go:89] "kube-proxy-5z6rc" [4f0dd5f1-a24c-4f95-8434-9ff6e40b85e6] Running
	I0111 07:57:33.693040  306327 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-585688" [c22fa171-44f1-4321-b77a-43088630dfc2] Running
	I0111 07:57:33.693045  306327 system_pods.go:89] "storage-provisioner" [7f00c923-02d5-4873-968e-207c6eaa9567] Running
	I0111 07:57:34.263398  306327 system_pods.go:86] 8 kube-system pods found
	I0111 07:57:34.263425  306327 system_pods.go:89] "coredns-7d764666f9-9n4hp" [9e11784a-1ace-44c0-8f9e-656ed390ed43] Running
	I0111 07:57:34.263430  306327 system_pods.go:89] "etcd-default-k8s-diff-port-585688" [8a1723a7-4506-4cbe-b1b5-c6f9df32979c] Running
	I0111 07:57:34.263435  306327 system_pods.go:89] "kindnet-lz5hh" [035a0293-8ca4-4f7c-8cc7-33f5f35eee60] Running
	I0111 07:57:34.263440  306327 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-585688" [2dd2d325-0f00-43c4-b8c3-40a121ad8236] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0111 07:57:34.263445  306327 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-585688" [f8d9761a-cb04-4f60-a85c-4e5d711937e5] Running
	I0111 07:57:34.263449  306327 system_pods.go:89] "kube-proxy-5z6rc" [4f0dd5f1-a24c-4f95-8434-9ff6e40b85e6] Running
	I0111 07:57:34.263452  306327 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-585688" [c22fa171-44f1-4321-b77a-43088630dfc2] Running
	I0111 07:57:34.263456  306327 system_pods.go:89] "storage-provisioner" [7f00c923-02d5-4873-968e-207c6eaa9567] Running
	I0111 07:57:34.263462  306327 system_pods.go:126] duration metric: took 1.477290816s to wait for k8s-apps to be running ...
	I0111 07:57:34.263469  306327 system_svc.go:44] waiting for kubelet service to be running ....
	I0111 07:57:34.263512  306327 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0111 07:57:34.280452  306327 system_svc.go:56] duration metric: took 16.967938ms WaitForService to wait for kubelet
	I0111 07:57:34.280502  306327 kubeadm.go:587] duration metric: took 14.384982233s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0111 07:57:34.280528  306327 node_conditions.go:102] verifying NodePressure condition ...
	I0111 07:57:34.283439  306327 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0111 07:57:34.283464  306327 node_conditions.go:123] node cpu capacity is 8
	I0111 07:57:34.283477  306327 node_conditions.go:105] duration metric: took 2.944893ms to run NodePressure ...
	I0111 07:57:34.283488  306327 start.go:242] waiting for startup goroutines ...
	I0111 07:57:34.283495  306327 start.go:247] waiting for cluster config update ...
	I0111 07:57:34.283504  306327 start.go:256] writing updated cluster config ...
	I0111 07:57:34.294456  306327 ssh_runner.go:195] Run: rm -f paused
	I0111 07:57:34.300780  306327 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0111 07:57:34.306312  306327 pod_ready.go:83] waiting for pod "coredns-7d764666f9-9n4hp" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 07:57:34.312277  306327 pod_ready.go:94] pod "coredns-7d764666f9-9n4hp" is "Ready"
	I0111 07:57:34.312301  306327 pod_ready.go:86] duration metric: took 5.953523ms for pod "coredns-7d764666f9-9n4hp" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 07:57:34.314413  306327 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-585688" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 07:57:34.318438  306327 pod_ready.go:94] pod "etcd-default-k8s-diff-port-585688" is "Ready"
	I0111 07:57:34.318459  306327 pod_ready.go:86] duration metric: took 4.022214ms for pod "etcd-default-k8s-diff-port-585688" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 07:57:34.320284  306327 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-585688" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 07:57:35.327506  306327 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-585688" is "Ready"
	I0111 07:57:35.327536  306327 pod_ready.go:86] duration metric: took 1.007228929s for pod "kube-apiserver-default-k8s-diff-port-585688" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 07:57:35.330669  306327 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-585688" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 07:57:35.506274  306327 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-585688" is "Ready"
	I0111 07:57:35.506305  306327 pod_ready.go:86] duration metric: took 175.607922ms for pod "kube-controller-manager-default-k8s-diff-port-585688" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 07:57:35.705855  306327 pod_ready.go:83] waiting for pod "kube-proxy-5z6rc" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 07:57:36.106088  306327 pod_ready.go:94] pod "kube-proxy-5z6rc" is "Ready"
	I0111 07:57:36.106121  306327 pod_ready.go:86] duration metric: took 400.234452ms for pod "kube-proxy-5z6rc" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 07:57:36.305557  306327 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-585688" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 07:57:36.705319  306327 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-585688" is "Ready"
	I0111 07:57:36.705349  306327 pod_ready.go:86] duration metric: took 399.768015ms for pod "kube-scheduler-default-k8s-diff-port-585688" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 07:57:36.705363  306327 pod_ready.go:40] duration metric: took 2.404529599s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0111 07:57:36.769853  306327 start.go:625] kubectl: 1.35.0, cluster: 1.35.0 (minor skew: 0)
	I0111 07:57:36.772848  306327 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-585688" cluster and "default" namespace by default
	W0111 07:57:34.071358  310739 pod_ready.go:104] pod "coredns-5dd5756b68-797jn" is not "Ready", error: <nil>
	W0111 07:57:36.569134  310739 pod_ready.go:104] pod "coredns-5dd5756b68-797jn" is not "Ready", error: <nil>
	W0111 07:57:36.694405  314330 pod_ready.go:104] pod "coredns-7d764666f9-2ffx6" is not "Ready", error: <nil>
	W0111 07:57:39.193098  314330 pod_ready.go:104] pod "coredns-7d764666f9-2ffx6" is not "Ready", error: <nil>
	W0111 07:57:36.821496  312970 pod_ready.go:104] pod "coredns-7d764666f9-gw9b6" is not "Ready", error: <nil>
	W0111 07:57:39.312137  312970 pod_ready.go:104] pod "coredns-7d764666f9-gw9b6" is not "Ready", error: <nil>
	W0111 07:57:39.069062  310739 pod_ready.go:104] pod "coredns-5dd5756b68-797jn" is not "Ready", error: <nil>
	W0111 07:57:41.567525  310739 pod_ready.go:104] pod "coredns-5dd5756b68-797jn" is not "Ready", error: <nil>
	W0111 07:57:41.692227  314330 pod_ready.go:104] pod "coredns-7d764666f9-2ffx6" is not "Ready", error: <nil>
	W0111 07:57:43.692748  314330 pod_ready.go:104] pod "coredns-7d764666f9-2ffx6" is not "Ready", error: <nil>
	W0111 07:57:41.810618  312970 pod_ready.go:104] pod "coredns-7d764666f9-gw9b6" is not "Ready", error: <nil>
	W0111 07:57:43.811886  312970 pod_ready.go:104] pod "coredns-7d764666f9-gw9b6" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Jan 11 07:57:32 default-k8s-diff-port-585688 crio[779]: time="2026-01-11T07:57:32.92179211Z" level=info msg="Starting container: ffd43e3046dfe31c054bdf60137c9c403d12f4f7a9e36c54fdf8cb29ddeee5a0" id=92919ba5-041d-4786-8422-37786597a64a name=/runtime.v1.RuntimeService/StartContainer
	Jan 11 07:57:32 default-k8s-diff-port-585688 crio[779]: time="2026-01-11T07:57:32.924836839Z" level=info msg="Started container" PID=1897 containerID=ffd43e3046dfe31c054bdf60137c9c403d12f4f7a9e36c54fdf8cb29ddeee5a0 description=kube-system/coredns-7d764666f9-9n4hp/coredns id=92919ba5-041d-4786-8422-37786597a64a name=/runtime.v1.RuntimeService/StartContainer sandboxID=989cbbd0a1c09cc66b3151d1ffc228fbc2d5608da5d1f6b464c46d767cc45bd7
	Jan 11 07:57:37 default-k8s-diff-port-585688 crio[779]: time="2026-01-11T07:57:37.3422938Z" level=info msg="Running pod sandbox: default/busybox/POD" id=a625b9a4-7b0d-4474-a36b-aa601177b211 name=/runtime.v1.RuntimeService/RunPodSandbox
	Jan 11 07:57:37 default-k8s-diff-port-585688 crio[779]: time="2026-01-11T07:57:37.342387897Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 11 07:57:37 default-k8s-diff-port-585688 crio[779]: time="2026-01-11T07:57:37.398415992Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:4d64e8359843d9ea8a8965ac365a7e30988bd7f9484d2e501c813882696dd4eb UID:36c5c6ac-10cc-4c99-b758-401a24ce4d9a NetNS:/var/run/netns/00771b94-8e93-4714-9bd5-9db176c547c8 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc001096470}] Aliases:map[]}"
	Jan 11 07:57:37 default-k8s-diff-port-585688 crio[779]: time="2026-01-11T07:57:37.398612133Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Jan 11 07:57:37 default-k8s-diff-port-585688 crio[779]: time="2026-01-11T07:57:37.421999697Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:4d64e8359843d9ea8a8965ac365a7e30988bd7f9484d2e501c813882696dd4eb UID:36c5c6ac-10cc-4c99-b758-401a24ce4d9a NetNS:/var/run/netns/00771b94-8e93-4714-9bd5-9db176c547c8 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc001096470}] Aliases:map[]}"
	Jan 11 07:57:37 default-k8s-diff-port-585688 crio[779]: time="2026-01-11T07:57:37.422180459Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Jan 11 07:57:37 default-k8s-diff-port-585688 crio[779]: time="2026-01-11T07:57:37.423373333Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Jan 11 07:57:37 default-k8s-diff-port-585688 crio[779]: time="2026-01-11T07:57:37.424734851Z" level=info msg="Ran pod sandbox 4d64e8359843d9ea8a8965ac365a7e30988bd7f9484d2e501c813882696dd4eb with infra container: default/busybox/POD" id=a625b9a4-7b0d-4474-a36b-aa601177b211 name=/runtime.v1.RuntimeService/RunPodSandbox
	Jan 11 07:57:37 default-k8s-diff-port-585688 crio[779]: time="2026-01-11T07:57:37.426335713Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=cc7f91c1-a2f6-41d2-8174-714cfa8ae165 name=/runtime.v1.ImageService/ImageStatus
	Jan 11 07:57:37 default-k8s-diff-port-585688 crio[779]: time="2026-01-11T07:57:37.426485367Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=cc7f91c1-a2f6-41d2-8174-714cfa8ae165 name=/runtime.v1.ImageService/ImageStatus
	Jan 11 07:57:37 default-k8s-diff-port-585688 crio[779]: time="2026-01-11T07:57:37.426589391Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=cc7f91c1-a2f6-41d2-8174-714cfa8ae165 name=/runtime.v1.ImageService/ImageStatus
	Jan 11 07:57:37 default-k8s-diff-port-585688 crio[779]: time="2026-01-11T07:57:37.42762486Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=8e805d35-d7ad-42c9-a0bf-46107d85c9e7 name=/runtime.v1.ImageService/PullImage
	Jan 11 07:57:37 default-k8s-diff-port-585688 crio[779]: time="2026-01-11T07:57:37.428111823Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Jan 11 07:57:38 default-k8s-diff-port-585688 crio[779]: time="2026-01-11T07:57:38.66830668Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=8e805d35-d7ad-42c9-a0bf-46107d85c9e7 name=/runtime.v1.ImageService/PullImage
	Jan 11 07:57:38 default-k8s-diff-port-585688 crio[779]: time="2026-01-11T07:57:38.669038388Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=302a40b9-25b3-41e5-9989-f866fad55ca9 name=/runtime.v1.ImageService/ImageStatus
	Jan 11 07:57:38 default-k8s-diff-port-585688 crio[779]: time="2026-01-11T07:57:38.671481801Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=99c9558f-72e3-4d5a-b465-d4cc19e08e48 name=/runtime.v1.ImageService/ImageStatus
	Jan 11 07:57:38 default-k8s-diff-port-585688 crio[779]: time="2026-01-11T07:57:38.675986575Z" level=info msg="Creating container: default/busybox/busybox" id=30bed049-4a2e-4fb2-a003-8342bb38d9b3 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 11 07:57:38 default-k8s-diff-port-585688 crio[779]: time="2026-01-11T07:57:38.676127976Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 11 07:57:38 default-k8s-diff-port-585688 crio[779]: time="2026-01-11T07:57:38.682516783Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 11 07:57:38 default-k8s-diff-port-585688 crio[779]: time="2026-01-11T07:57:38.683083919Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 11 07:57:38 default-k8s-diff-port-585688 crio[779]: time="2026-01-11T07:57:38.720573259Z" level=info msg="Created container d3e476a7663fa5a9468f26d6f0ae1801fb26e8413b25c202540983cdd86e1ff5: default/busybox/busybox" id=30bed049-4a2e-4fb2-a003-8342bb38d9b3 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 11 07:57:38 default-k8s-diff-port-585688 crio[779]: time="2026-01-11T07:57:38.721608909Z" level=info msg="Starting container: d3e476a7663fa5a9468f26d6f0ae1801fb26e8413b25c202540983cdd86e1ff5" id=ad2cadde-1309-4e40-b9fd-0adbb10bd356 name=/runtime.v1.RuntimeService/StartContainer
	Jan 11 07:57:38 default-k8s-diff-port-585688 crio[779]: time="2026-01-11T07:57:38.724073494Z" level=info msg="Started container" PID=1972 containerID=d3e476a7663fa5a9468f26d6f0ae1801fb26e8413b25c202540983cdd86e1ff5 description=default/busybox/busybox id=ad2cadde-1309-4e40-b9fd-0adbb10bd356 name=/runtime.v1.RuntimeService/StartContainer sandboxID=4d64e8359843d9ea8a8965ac365a7e30988bd7f9484d2e501c813882696dd4eb
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	d3e476a7663fa       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   7 seconds ago       Running             busybox                   0                   4d64e8359843d       busybox                                                default
	ffd43e3046dfe       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                      13 seconds ago      Running             coredns                   0                   989cbbd0a1c09       coredns-7d764666f9-9n4hp                               kube-system
	692237d02e4de       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 seconds ago      Running             storage-provisioner       0                   794d4f8afdc6c       storage-provisioner                                    kube-system
	32d94512cf4a1       docker.io/kindest/kindnetd@sha256:7c22558dc06a570d46ea6e8a73b23cdc754eb81f7c08d3441a3171ad359ffc27    24 seconds ago      Running             kindnet-cni               0                   1f4a3b3875b34       kindnet-lz5hh                                          kube-system
	4db7115a820fb       32652ff1bbe6b6df16a3dc621fcad4e6e185a2852bffc5847aab79e36c54bfb8                                      25 seconds ago      Running             kube-proxy                0                   2e5d4f8fab880       kube-proxy-5z6rc                                       kube-system
	92e5ba3d195f8       5c6acd67e9cd10eb60a246cd233db251ef62ea97e6572f897e873f0cb648f499                                      36 seconds ago      Running             kube-apiserver            0                   ae3bcb200ff11       kube-apiserver-default-k8s-diff-port-585688            kube-system
	1898cce0bc3ae       0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2                                      36 seconds ago      Running             etcd                      0                   0ede86ae4776d       etcd-default-k8s-diff-port-585688                      kube-system
	51f6085d19154       550794e3b12ac21ec7fd940bdfb45f7c8dae3c52acd8c70e580b882de20c3dcc                                      36 seconds ago      Running             kube-scheduler            0                   1642b2ae7e8b5       kube-scheduler-default-k8s-diff-port-585688            kube-system
	214a39f525a05       2c9a4b058bd7e6ec479c38f9e8a1dac2f5ee5b0a3ebda6dfac968e8720229508                                      36 seconds ago      Running             kube-controller-manager   0                   a4ba6fbf28e23       kube-controller-manager-default-k8s-diff-port-585688   kube-system
	
	
	==> coredns [ffd43e3046dfe31c054bdf60137c9c403d12f4f7a9e36c54fdf8cb29ddeee5a0] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:49086 - 24404 "HINFO IN 3921684831664535706.4613030377361700125. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.037999773s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-585688
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-585688
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4473aa4ffaa416872fe849e19c0ce3dabca02c04
	                    minikube.k8s.io/name=default-k8s-diff-port-585688
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2026_01_11T07_57_15_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 11 Jan 2026 07:57:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-585688
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 11 Jan 2026 07:57:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 11 Jan 2026 07:57:45 +0000   Sun, 11 Jan 2026 07:57:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 11 Jan 2026 07:57:45 +0000   Sun, 11 Jan 2026 07:57:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 11 Jan 2026 07:57:45 +0000   Sun, 11 Jan 2026 07:57:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 11 Jan 2026 07:57:45 +0000   Sun, 11 Jan 2026 07:57:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    default-k8s-diff-port-585688
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 19a608e996fda0d6a8d0aefd69620b69
	  System UUID:                a93c91b9-2231-4c51-8ee3-8e1fc4eb7cf8
	  Boot ID:                    bf5ef4fd-1e5e-4242-b522-48b308ed11f1
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.35.0
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-7d764666f9-9n4hp                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     26s
	  kube-system                 etcd-default-k8s-diff-port-585688                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         32s
	  kube-system                 kindnet-lz5hh                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      26s
	  kube-system                 kube-apiserver-default-k8s-diff-port-585688             250m (3%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-585688    200m (2%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-proxy-5z6rc                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                 kube-scheduler-default-k8s-diff-port-585688             100m (1%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  27s   node-controller  Node default-k8s-diff-port-585688 event: Registered Node default-k8s-diff-port-585688 in Controller
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff d2 27 60 f0 e8 33 08 06
	[  +7.744805] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ae ab 39 92 a9 51 08 06
	[  +3.804668] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff f6 cc 68 e4 77 08 08 06
	[  +0.000399] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff d2 27 60 f0 e8 33 08 06
	[ +13.685163] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff ce 63 58 3f ae 06 08 06
	[  +0.000359] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ae ab 39 92 a9 51 08 06
	[ +15.294069] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 62 fd 79 5e 2e 8a 08 06
	[  +0.000335] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 92 a7 42 30 c5 33 08 06
	[  +0.001094] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 76 49 4e 78 31 d8 08 06
	[Jan11 07:56] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 1e 72 0a 69 e2 8c 08 06
	[  +0.129588] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 46 31 8d 74 d3 50 08 06
	[ +23.675508] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 72 b4 a7 bf 59 ba 08 06
	[  +0.000409] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 46 31 8d 74 d3 50 08 06
	
	
	==> etcd [1898cce0bc3aecb2763ffb185d69222545535c1e84e921be10ba8bc2b5857824] <==
	{"level":"info","ts":"2026-01-11T07:57:10.365790Z","caller":"membership/cluster.go:424","msg":"added member","cluster-id":"da400bbece288f5a","local-member-id":"dfc97eb0aae75b33","added-peer-id":"dfc97eb0aae75b33","added-peer-peer-urls":["https://192.168.94.2:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2026-01-11T07:57:11.357766Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"dfc97eb0aae75b33 is starting a new election at term 1"}
	{"level":"info","ts":"2026-01-11T07:57:11.357854Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"dfc97eb0aae75b33 became pre-candidate at term 1"}
	{"level":"info","ts":"2026-01-11T07:57:11.357950Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"dfc97eb0aae75b33 received MsgPreVoteResp from dfc97eb0aae75b33 at term 1"}
	{"level":"info","ts":"2026-01-11T07:57:11.357981Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"dfc97eb0aae75b33 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2026-01-11T07:57:11.357998Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"dfc97eb0aae75b33 became candidate at term 2"}
	{"level":"info","ts":"2026-01-11T07:57:11.358657Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"dfc97eb0aae75b33 received MsgVoteResp from dfc97eb0aae75b33 at term 2"}
	{"level":"info","ts":"2026-01-11T07:57:11.358683Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"dfc97eb0aae75b33 has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2026-01-11T07:57:11.358700Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"dfc97eb0aae75b33 became leader at term 2"}
	{"level":"info","ts":"2026-01-11T07:57:11.358707Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: dfc97eb0aae75b33 elected leader dfc97eb0aae75b33 at term 2"}
	{"level":"info","ts":"2026-01-11T07:57:11.359354Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"dfc97eb0aae75b33","local-member-attributes":"{Name:default-k8s-diff-port-585688 ClientURLs:[https://192.168.94.2:2379]}","cluster-id":"da400bbece288f5a","publish-timeout":"7s"}
	{"level":"info","ts":"2026-01-11T07:57:11.359360Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-11T07:57:11.359400Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-11T07:57:11.359427Z","caller":"etcdserver/server.go:2420","msg":"setting up initial cluster version using v3 API","cluster-version":"3.6"}
	{"level":"info","ts":"2026-01-11T07:57:11.359583Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2026-01-11T07:57:11.359610Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2026-01-11T07:57:11.360144Z","caller":"membership/cluster.go:682","msg":"set initial cluster version","cluster-id":"da400bbece288f5a","local-member-id":"dfc97eb0aae75b33","cluster-version":"3.6"}
	{"level":"info","ts":"2026-01-11T07:57:11.360283Z","caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.6"}
	{"level":"info","ts":"2026-01-11T07:57:11.360359Z","caller":"etcdserver/server.go:2440","msg":"cluster version is updated","cluster-version":"3.6"}
	{"level":"info","ts":"2026-01-11T07:57:11.360403Z","caller":"version/monitor.go:116","msg":"cluster version differs from storage version.","cluster-version":"3.6.0","storage-version":"3.5.0"}
	{"level":"info","ts":"2026-01-11T07:57:11.360553Z","caller":"schema/migration.go:65","msg":"updated storage version","new-storage-version":"3.6.0"}
	{"level":"info","ts":"2026-01-11T07:57:11.360860Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-11T07:57:11.361024Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-11T07:57:11.363721Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2026-01-11T07:57:11.363980Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.94.2:2379"}
	
	
	==> kernel <==
	 07:57:46 up 40 min,  0 user,  load average: 4.54, 3.64, 2.53
	Linux default-k8s-diff-port-585688 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [32d94512cf4a1a4d0e337e7942f578029245b30034ebe530c40da452ab253a19] <==
	I0111 07:57:22.113608       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0111 07:57:22.113908       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I0111 07:57:22.114049       1 main.go:148] setting mtu 1500 for CNI 
	I0111 07:57:22.114066       1 main.go:178] kindnetd IP family: "ipv4"
	I0111 07:57:22.114098       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2026-01-11T07:57:22Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0111 07:57:22.405986       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0111 07:57:22.406120       1 controller.go:381] "Waiting for informer caches to sync"
	I0111 07:57:22.406158       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0111 07:57:22.406391       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I0111 07:57:22.805867       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0111 07:57:22.805981       1 metrics.go:72] Registering metrics
	I0111 07:57:22.806233       1 controller.go:711] "Syncing nftables rules"
	I0111 07:57:32.315796       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0111 07:57:32.315885       1 main.go:301] handling current node
	I0111 07:57:42.317588       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0111 07:57:42.317629       1 main.go:301] handling current node
	
	
	==> kube-apiserver [92e5ba3d195f8f90cef182ac431089f5022116434b4a2c4511a27353d7ef809d] <==
	I0111 07:57:12.394249       1 policy_source.go:248] refreshing policies
	E0111 07:57:12.410667       1 controller.go:156] "Error while syncing ConfigMap" err="namespaces \"kube-system\" not found" logger="UnhandledError" configmap="kube-system/kube-apiserver-legacy-service-account-token-tracking"
	I0111 07:57:12.459597       1 controller.go:667] quota admission added evaluator for: namespaces
	I0111 07:57:12.461154       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0111 07:57:12.461437       1 default_servicecidr_controller.go:231] Setting default ServiceCIDR condition Ready to True
	I0111 07:57:12.466524       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0111 07:57:12.561113       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I0111 07:57:13.261942       1 storage_scheduling.go:123] created PriorityClass system-node-critical with value 2000001000
	I0111 07:57:13.265847       1 storage_scheduling.go:123] created PriorityClass system-cluster-critical with value 2000000000
	I0111 07:57:13.265874       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I0111 07:57:13.761095       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0111 07:57:13.797282       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0111 07:57:13.868411       1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0111 07:57:13.874757       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.94.2]
	I0111 07:57:13.875868       1 controller.go:667] quota admission added evaluator for: endpoints
	I0111 07:57:13.880279       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0111 07:57:14.283597       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I0111 07:57:14.865333       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I0111 07:57:14.896814       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0111 07:57:14.911219       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I0111 07:57:19.889390       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0111 07:57:19.993880       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0111 07:57:20.003155       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0111 07:57:20.287402       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E0111 07:57:45.096132       1 conn.go:339] Error on socket receive: read tcp 192.168.94.2:8444->192.168.94.1:49130: use of closed network connection
	
	
	==> kube-controller-manager [214a39f525a05d2715531402e0f85ff6f4581f1905a55cddc3af401a887b86ee] <==
	I0111 07:57:19.091705       1 shared_informer.go:377] "Caches are synced"
	I0111 07:57:19.091893       1 shared_informer.go:377] "Caches are synced"
	I0111 07:57:19.092025       1 shared_informer.go:377] "Caches are synced"
	I0111 07:57:19.092315       1 shared_informer.go:377] "Caches are synced"
	I0111 07:57:19.092634       1 shared_informer.go:377] "Caches are synced"
	I0111 07:57:19.094170       1 range_allocator.go:177] "Sending events to api server"
	I0111 07:57:19.094378       1 range_allocator.go:181] "Starting range CIDR allocator"
	I0111 07:57:19.094392       1 shared_informer.go:370] "Waiting for caches to sync"
	I0111 07:57:19.094399       1 shared_informer.go:377] "Caches are synced"
	I0111 07:57:19.094793       1 shared_informer.go:377] "Caches are synced"
	I0111 07:57:19.094955       1 shared_informer.go:377] "Caches are synced"
	I0111 07:57:19.095143       1 shared_informer.go:377] "Caches are synced"
	I0111 07:57:19.097370       1 shared_informer.go:377] "Caches are synced"
	I0111 07:57:19.097906       1 shared_informer.go:377] "Caches are synced"
	I0111 07:57:19.098124       1 shared_informer.go:377] "Caches are synced"
	I0111 07:57:19.098146       1 shared_informer.go:377] "Caches are synced"
	I0111 07:57:19.098467       1 shared_informer.go:377] "Caches are synced"
	I0111 07:57:19.102757       1 shared_informer.go:377] "Caches are synced"
	I0111 07:57:19.113688       1 range_allocator.go:433] "Set node PodCIDR" node="default-k8s-diff-port-585688" podCIDRs=["10.244.0.0/24"]
	I0111 07:57:19.116953       1 shared_informer.go:370] "Waiting for caches to sync"
	I0111 07:57:19.187897       1 shared_informer.go:377] "Caches are synced"
	I0111 07:57:19.187977       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I0111 07:57:19.187990       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I0111 07:57:19.218038       1 shared_informer.go:377] "Caches are synced"
	I0111 07:57:34.091588       1 node_lifecycle_controller.go:1057] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-proxy [4db7115a820fbe89215e508100cba027ae75bdd4d8cb9e6b929f63199f9e1bfc] <==
	I0111 07:57:20.749230       1 server_linux.go:53] "Using iptables proxy"
	I0111 07:57:20.809560       1 shared_informer.go:370] "Waiting for caches to sync"
	I0111 07:57:20.910398       1 shared_informer.go:377] "Caches are synced"
	I0111 07:57:20.910447       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E0111 07:57:20.910572       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0111 07:57:20.937463       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0111 07:57:20.937532       1 server_linux.go:136] "Using iptables Proxier"
	I0111 07:57:20.943288       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0111 07:57:20.944177       1 server.go:529] "Version info" version="v1.35.0"
	I0111 07:57:20.944206       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0111 07:57:20.946387       1 config.go:309] "Starting node config controller"
	I0111 07:57:20.946409       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0111 07:57:20.946621       1 config.go:403] "Starting serviceCIDR config controller"
	I0111 07:57:20.946647       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0111 07:57:20.946648       1 config.go:200] "Starting service config controller"
	I0111 07:57:20.946662       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0111 07:57:20.946678       1 config.go:106] "Starting endpoint slice config controller"
	I0111 07:57:20.946684       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0111 07:57:21.047290       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0111 07:57:21.047352       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0111 07:57:21.048423       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0111 07:57:21.048541       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
	==> kube-scheduler [51f6085d191549ad1037f63226b84c4579aaf1c13ddb0205d4a02552ab1692be] <==
	E0111 07:57:12.319895       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E0111 07:57:12.319927       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E0111 07:57:12.320001       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E0111 07:57:12.320005       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E0111 07:57:12.320135       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E0111 07:57:12.320238       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E0111 07:57:12.320318       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E0111 07:57:12.320669       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E0111 07:57:12.320669       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E0111 07:57:12.320742       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E0111 07:57:12.321054       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E0111 07:57:12.321067       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E0111 07:57:12.321111       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E0111 07:57:12.321232       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E0111 07:57:12.321420       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E0111 07:57:13.166931       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E0111 07:57:13.204384       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E0111 07:57:13.206486       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E0111 07:57:13.227002       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E0111 07:57:13.241464       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E0111 07:57:13.244345       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E0111 07:57:13.281544       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E0111 07:57:13.290699       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E0111 07:57:13.728551       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1693" type="*v1.ConfigMap"
	I0111 07:57:16.812785       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Jan 11 07:57:20 default-k8s-diff-port-585688 kubelet[1311]: I0111 07:57:20.423096    1311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9djq\" (UniqueName: \"kubernetes.io/projected/035a0293-8ca4-4f7c-8cc7-33f5f35eee60-kube-api-access-d9djq\") pod \"kindnet-lz5hh\" (UID: \"035a0293-8ca4-4f7c-8cc7-33f5f35eee60\") " pod="kube-system/kindnet-lz5hh"
	Jan 11 07:57:20 default-k8s-diff-port-585688 kubelet[1311]: I0111 07:57:20.423130    1311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/4f0dd5f1-a24c-4f95-8434-9ff6e40b85e6-kube-proxy\") pod \"kube-proxy-5z6rc\" (UID: \"4f0dd5f1-a24c-4f95-8434-9ff6e40b85e6\") " pod="kube-system/kube-proxy-5z6rc"
	Jan 11 07:57:20 default-k8s-diff-port-585688 kubelet[1311]: I0111 07:57:20.423151    1311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5p2kx\" (UniqueName: \"kubernetes.io/projected/4f0dd5f1-a24c-4f95-8434-9ff6e40b85e6-kube-api-access-5p2kx\") pod \"kube-proxy-5z6rc\" (UID: \"4f0dd5f1-a24c-4f95-8434-9ff6e40b85e6\") " pod="kube-system/kube-proxy-5z6rc"
	Jan 11 07:57:20 default-k8s-diff-port-585688 kubelet[1311]: I0111 07:57:20.423173    1311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/035a0293-8ca4-4f7c-8cc7-33f5f35eee60-xtables-lock\") pod \"kindnet-lz5hh\" (UID: \"035a0293-8ca4-4f7c-8cc7-33f5f35eee60\") " pod="kube-system/kindnet-lz5hh"
	Jan 11 07:57:20 default-k8s-diff-port-585688 kubelet[1311]: I0111 07:57:20.877977    1311 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-5z6rc" podStartSLOduration=0.877948718 podStartE2EDuration="877.948718ms" podCreationTimestamp="2026-01-11 07:57:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-11 07:57:20.877854226 +0000 UTC m=+6.215663947" watchObservedRunningTime="2026-01-11 07:57:20.877948718 +0000 UTC m=+6.215758452"
	Jan 11 07:57:22 default-k8s-diff-port-585688 kubelet[1311]: I0111 07:57:22.890580    1311 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kindnet-lz5hh" podStartSLOduration=1.630415787 podStartE2EDuration="2.890559953s" podCreationTimestamp="2026-01-11 07:57:20 +0000 UTC" firstStartedPulling="2026-01-11 07:57:20.638368496 +0000 UTC m=+5.976178212" lastFinishedPulling="2026-01-11 07:57:21.898512679 +0000 UTC m=+7.236322378" observedRunningTime="2026-01-11 07:57:22.888772271 +0000 UTC m=+8.226582014" watchObservedRunningTime="2026-01-11 07:57:22.890559953 +0000 UTC m=+8.228369674"
	Jan 11 07:57:24 default-k8s-diff-port-585688 kubelet[1311]: E0111 07:57:24.803321    1311 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-default-k8s-diff-port-585688" containerName="kube-apiserver"
	Jan 11 07:57:25 default-k8s-diff-port-585688 kubelet[1311]: E0111 07:57:25.251311    1311 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-default-k8s-diff-port-585688" containerName="etcd"
	Jan 11 07:57:25 default-k8s-diff-port-585688 kubelet[1311]: E0111 07:57:25.587282    1311 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-default-k8s-diff-port-585688" containerName="kube-scheduler"
	Jan 11 07:57:25 default-k8s-diff-port-585688 kubelet[1311]: E0111 07:57:25.880853    1311 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-default-k8s-diff-port-585688" containerName="etcd"
	Jan 11 07:57:25 default-k8s-diff-port-585688 kubelet[1311]: E0111 07:57:25.881241    1311 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-default-k8s-diff-port-585688" containerName="kube-scheduler"
	Jan 11 07:57:30 default-k8s-diff-port-585688 kubelet[1311]: E0111 07:57:30.079251    1311 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-default-k8s-diff-port-585688" containerName="kube-controller-manager"
	Jan 11 07:57:32 default-k8s-diff-port-585688 kubelet[1311]: I0111 07:57:32.491086    1311 kubelet_node_status.go:427] "Fast updating node status as it just became ready"
	Jan 11 07:57:32 default-k8s-diff-port-585688 kubelet[1311]: I0111 07:57:32.608939    1311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9e11784a-1ace-44c0-8f9e-656ed390ed43-config-volume\") pod \"coredns-7d764666f9-9n4hp\" (UID: \"9e11784a-1ace-44c0-8f9e-656ed390ed43\") " pod="kube-system/coredns-7d764666f9-9n4hp"
	Jan 11 07:57:32 default-k8s-diff-port-585688 kubelet[1311]: I0111 07:57:32.609012    1311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/7f00c923-02d5-4873-968e-207c6eaa9567-tmp\") pod \"storage-provisioner\" (UID: \"7f00c923-02d5-4873-968e-207c6eaa9567\") " pod="kube-system/storage-provisioner"
	Jan 11 07:57:32 default-k8s-diff-port-585688 kubelet[1311]: I0111 07:57:32.609071    1311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qm2c9\" (UniqueName: \"kubernetes.io/projected/7f00c923-02d5-4873-968e-207c6eaa9567-kube-api-access-qm2c9\") pod \"storage-provisioner\" (UID: \"7f00c923-02d5-4873-968e-207c6eaa9567\") " pod="kube-system/storage-provisioner"
	Jan 11 07:57:32 default-k8s-diff-port-585688 kubelet[1311]: I0111 07:57:32.609153    1311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fxqq\" (UniqueName: \"kubernetes.io/projected/9e11784a-1ace-44c0-8f9e-656ed390ed43-kube-api-access-5fxqq\") pod \"coredns-7d764666f9-9n4hp\" (UID: \"9e11784a-1ace-44c0-8f9e-656ed390ed43\") " pod="kube-system/coredns-7d764666f9-9n4hp"
	Jan 11 07:57:33 default-k8s-diff-port-585688 kubelet[1311]: E0111 07:57:33.905648    1311 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-9n4hp" containerName="coredns"
	Jan 11 07:57:33 default-k8s-diff-port-585688 kubelet[1311]: I0111 07:57:33.927523    1311 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=13.927499243 podStartE2EDuration="13.927499243s" podCreationTimestamp="2026-01-11 07:57:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-11 07:57:32.919403638 +0000 UTC m=+18.257213360" watchObservedRunningTime="2026-01-11 07:57:33.927499243 +0000 UTC m=+19.265308964"
	Jan 11 07:57:33 default-k8s-diff-port-585688 kubelet[1311]: I0111 07:57:33.944335    1311 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-9n4hp" podStartSLOduration=13.944315981 podStartE2EDuration="13.944315981s" podCreationTimestamp="2026-01-11 07:57:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-11 07:57:33.928394479 +0000 UTC m=+19.266204200" watchObservedRunningTime="2026-01-11 07:57:33.944315981 +0000 UTC m=+19.282125705"
	Jan 11 07:57:34 default-k8s-diff-port-585688 kubelet[1311]: E0111 07:57:34.812141    1311 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-default-k8s-diff-port-585688" containerName="kube-apiserver"
	Jan 11 07:57:34 default-k8s-diff-port-585688 kubelet[1311]: E0111 07:57:34.908493    1311 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-9n4hp" containerName="coredns"
	Jan 11 07:57:35 default-k8s-diff-port-585688 kubelet[1311]: E0111 07:57:35.910516    1311 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-9n4hp" containerName="coredns"
	Jan 11 07:57:37 default-k8s-diff-port-585688 kubelet[1311]: I0111 07:57:37.140030    1311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvl5h\" (UniqueName: \"kubernetes.io/projected/36c5c6ac-10cc-4c99-b758-401a24ce4d9a-kube-api-access-rvl5h\") pod \"busybox\" (UID: \"36c5c6ac-10cc-4c99-b758-401a24ce4d9a\") " pod="default/busybox"
	Jan 11 07:57:38 default-k8s-diff-port-585688 kubelet[1311]: I0111 07:57:38.934361    1311 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.690877289 podStartE2EDuration="2.934341019s" podCreationTimestamp="2026-01-11 07:57:36 +0000 UTC" firstStartedPulling="2026-01-11 07:57:37.427087873 +0000 UTC m=+22.764897573" lastFinishedPulling="2026-01-11 07:57:38.670551585 +0000 UTC m=+24.008361303" observedRunningTime="2026-01-11 07:57:38.934128812 +0000 UTC m=+24.271938534" watchObservedRunningTime="2026-01-11 07:57:38.934341019 +0000 UTC m=+24.272150741"
	
	
	==> storage-provisioner [692237d02e4dedce620fd009ab218743d6c6351af24214d91b7f1c8a01cccf4e] <==
	I0111 07:57:32.901853       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0111 07:57:32.913293       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0111 07:57:32.913430       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W0111 07:57:32.920638       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 07:57:32.929260       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0111 07:57:32.929475       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0111 07:57:32.929660       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-585688_eb9e05aa-ca2f-41cb-83c7-25fef20e3302!
	I0111 07:57:32.930677       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"06bdb24f-e9c9-497c-a246-c1be0eb5956d", APIVersion:"v1", ResourceVersion:"413", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-585688_eb9e05aa-ca2f-41cb-83c7-25fef20e3302 became leader
	W0111 07:57:32.933272       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 07:57:32.936778       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0111 07:57:33.030456       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-585688_eb9e05aa-ca2f-41cb-83c7-25fef20e3302!
	W0111 07:57:34.941085       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 07:57:34.946626       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 07:57:36.950638       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 07:57:36.958764       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 07:57:38.963040       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 07:57:38.967609       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 07:57:40.971681       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 07:57:40.976892       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 07:57:43.000118       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 07:57:43.018125       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 07:57:45.021763       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 07:57:45.027736       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-585688 -n default-k8s-diff-port-585688
helpers_test.go:270: (dbg) Run:  kubectl --context default-k8s-diff-port-585688 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.17s)
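Note: the two post-mortem probes above can be replayed by hand against the same profile while it is still up. This is only a sketch and assumes the default-k8s-diff-port-585688 profile and its kubeconfig context still exist on the host:

	# print only the API server status field for this profile (same probe as helpers_test.go:263)
	out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-585688 -n default-k8s-diff-port-585688
	# list any pod, in any namespace, that is not in the Running phase (same probe as helpers_test.go:270)
	kubectl --context default-k8s-diff-port-585688 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running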

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (6.78s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-086314 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p old-k8s-version-086314 --alsologtostderr -v=1: exit status 80 (2.493777207s)

                                                
                                                
-- stdout --
	* Pausing node old-k8s-version-086314 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0111 07:58:06.762983  321804 out.go:360] Setting OutFile to fd 1 ...
	I0111 07:58:06.763212  321804 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 07:58:06.763221  321804 out.go:374] Setting ErrFile to fd 2...
	I0111 07:58:06.763225  321804 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 07:58:06.763421  321804 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22402-4689/.minikube/bin
	I0111 07:58:06.763683  321804 out.go:368] Setting JSON to false
	I0111 07:58:06.763700  321804 mustload.go:66] Loading cluster: old-k8s-version-086314
	I0111 07:58:06.764074  321804 config.go:182] Loaded profile config "old-k8s-version-086314": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I0111 07:58:06.764439  321804 cli_runner.go:164] Run: docker container inspect old-k8s-version-086314 --format={{.State.Status}}
	I0111 07:58:06.783411  321804 host.go:66] Checking if "old-k8s-version-086314" exists ...
	I0111 07:58:06.783669  321804 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0111 07:58:06.839431  321804 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:77 OomKillDisable:false NGoroutines:85 SystemTime:2026-01-11 07:58:06.829469006 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0111 07:58:06.840020  321804 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22376/minikube-v1.37.0-1767438792-22376-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1767438792-22376/minikube-v1.37.0-1767438792-22376-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1767438792-22376-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) preload-source:auto profile:old-k8s-version-086314 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) rosetta:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I0111 07:58:06.841962  321804 out.go:179] * Pausing node old-k8s-version-086314 ... 
	I0111 07:58:06.843433  321804 host.go:66] Checking if "old-k8s-version-086314" exists ...
	I0111 07:58:06.843694  321804 ssh_runner.go:195] Run: systemctl --version
	I0111 07:58:06.843729  321804 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-086314
	I0111 07:58:06.862101  321804 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/22402-4689/.minikube/machines/old-k8s-version-086314/id_rsa Username:docker}
	I0111 07:58:06.963685  321804 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0111 07:58:06.976604  321804 pause.go:52] kubelet running: true
	I0111 07:58:06.976674  321804 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0111 07:58:07.137713  321804 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I0111 07:58:07.137790  321804 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0111 07:58:07.207272  321804 cri.go:96] found id: "3873b8c3319057ffe28b5dd096d787bb61bf55ed50eb16c467bff2842ce12aea"
	I0111 07:58:07.207297  321804 cri.go:96] found id: "0dd22536edc65b5899a77d165fd818b4fd7c80d7bb59506a4d00cbcd12bd09f8"
	I0111 07:58:07.207302  321804 cri.go:96] found id: "2a89cf140b40e92be6883546b9d6eaa87b04656fede8886d6f6498fd3fef6b08"
	I0111 07:58:07.207305  321804 cri.go:96] found id: "2887e5b7f844309154ee2ebabd60573461082436207c18a29bc160eef0337f27"
	I0111 07:58:07.207308  321804 cri.go:96] found id: "df31d3d09ca389436cb3a01ded5e81bc1d65c374669f941d29494f6983c8f2ec"
	I0111 07:58:07.207312  321804 cri.go:96] found id: "21d8f6523f41f21ffeda4f2bbe6d7a6d6e3b5777d00e6636a3e24a10e4d289b6"
	I0111 07:58:07.207314  321804 cri.go:96] found id: "0edb54dd4527597f168155b2e84150456758f58bfbdfc095e1eb8fb25625aba1"
	I0111 07:58:07.207317  321804 cri.go:96] found id: "d6b9eea8c7a54fe3616dbfe3782652d9def84a72df838c12acfb555c6b06365f"
	I0111 07:58:07.207319  321804 cri.go:96] found id: "9aea2caaf541351d25ce6ed8626deeaa0c2f10b82ca916b553d95e5d0f95e0b8"
	I0111 07:58:07.207332  321804 cri.go:96] found id: "e7d5e624df5a69fbe9f27d96d8e518459c67009ca71ee543407b20facfc94c35"
	I0111 07:58:07.207335  321804 cri.go:96] found id: "ffd86ac4a50fa6d2ff0de994b260b31f36892f9782d8d5d20f9d83f614c2a8b6"
	I0111 07:58:07.207337  321804 cri.go:96] found id: ""
	I0111 07:58:07.207374  321804 ssh_runner.go:195] Run: sudo runc list -f json
	I0111 07:58:07.219549  321804 retry.go:84] will retry after 200ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-11T07:58:07Z" level=error msg="open /run/runc: no such file or directory"
	I0111 07:58:07.421069  321804 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0111 07:58:07.434903  321804 pause.go:52] kubelet running: false
	I0111 07:58:07.434965  321804 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0111 07:58:07.581610  321804 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I0111 07:58:07.581719  321804 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0111 07:58:07.655756  321804 cri.go:96] found id: "3873b8c3319057ffe28b5dd096d787bb61bf55ed50eb16c467bff2842ce12aea"
	I0111 07:58:07.655781  321804 cri.go:96] found id: "0dd22536edc65b5899a77d165fd818b4fd7c80d7bb59506a4d00cbcd12bd09f8"
	I0111 07:58:07.655786  321804 cri.go:96] found id: "2a89cf140b40e92be6883546b9d6eaa87b04656fede8886d6f6498fd3fef6b08"
	I0111 07:58:07.655790  321804 cri.go:96] found id: "2887e5b7f844309154ee2ebabd60573461082436207c18a29bc160eef0337f27"
	I0111 07:58:07.655794  321804 cri.go:96] found id: "df31d3d09ca389436cb3a01ded5e81bc1d65c374669f941d29494f6983c8f2ec"
	I0111 07:58:07.655810  321804 cri.go:96] found id: "21d8f6523f41f21ffeda4f2bbe6d7a6d6e3b5777d00e6636a3e24a10e4d289b6"
	I0111 07:58:07.655815  321804 cri.go:96] found id: "0edb54dd4527597f168155b2e84150456758f58bfbdfc095e1eb8fb25625aba1"
	I0111 07:58:07.655819  321804 cri.go:96] found id: "d6b9eea8c7a54fe3616dbfe3782652d9def84a72df838c12acfb555c6b06365f"
	I0111 07:58:07.655824  321804 cri.go:96] found id: "9aea2caaf541351d25ce6ed8626deeaa0c2f10b82ca916b553d95e5d0f95e0b8"
	I0111 07:58:07.655843  321804 cri.go:96] found id: "e7d5e624df5a69fbe9f27d96d8e518459c67009ca71ee543407b20facfc94c35"
	I0111 07:58:07.655847  321804 cri.go:96] found id: "ffd86ac4a50fa6d2ff0de994b260b31f36892f9782d8d5d20f9d83f614c2a8b6"
	I0111 07:58:07.655851  321804 cri.go:96] found id: ""
	I0111 07:58:07.655908  321804 ssh_runner.go:195] Run: sudo runc list -f json
	I0111 07:58:08.165499  321804 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0111 07:58:08.190192  321804 pause.go:52] kubelet running: false
	I0111 07:58:08.190252  321804 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0111 07:58:08.343555  321804 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I0111 07:58:08.343639  321804 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0111 07:58:08.419354  321804 cri.go:96] found id: "3873b8c3319057ffe28b5dd096d787bb61bf55ed50eb16c467bff2842ce12aea"
	I0111 07:58:08.419379  321804 cri.go:96] found id: "0dd22536edc65b5899a77d165fd818b4fd7c80d7bb59506a4d00cbcd12bd09f8"
	I0111 07:58:08.419385  321804 cri.go:96] found id: "2a89cf140b40e92be6883546b9d6eaa87b04656fede8886d6f6498fd3fef6b08"
	I0111 07:58:08.419389  321804 cri.go:96] found id: "2887e5b7f844309154ee2ebabd60573461082436207c18a29bc160eef0337f27"
	I0111 07:58:08.419394  321804 cri.go:96] found id: "df31d3d09ca389436cb3a01ded5e81bc1d65c374669f941d29494f6983c8f2ec"
	I0111 07:58:08.419399  321804 cri.go:96] found id: "21d8f6523f41f21ffeda4f2bbe6d7a6d6e3b5777d00e6636a3e24a10e4d289b6"
	I0111 07:58:08.419404  321804 cri.go:96] found id: "0edb54dd4527597f168155b2e84150456758f58bfbdfc095e1eb8fb25625aba1"
	I0111 07:58:08.419409  321804 cri.go:96] found id: "d6b9eea8c7a54fe3616dbfe3782652d9def84a72df838c12acfb555c6b06365f"
	I0111 07:58:08.419414  321804 cri.go:96] found id: "9aea2caaf541351d25ce6ed8626deeaa0c2f10b82ca916b553d95e5d0f95e0b8"
	I0111 07:58:08.419430  321804 cri.go:96] found id: "e7d5e624df5a69fbe9f27d96d8e518459c67009ca71ee543407b20facfc94c35"
	I0111 07:58:08.419439  321804 cri.go:96] found id: "ffd86ac4a50fa6d2ff0de994b260b31f36892f9782d8d5d20f9d83f614c2a8b6"
	I0111 07:58:08.419443  321804 cri.go:96] found id: ""
	I0111 07:58:08.419485  321804 ssh_runner.go:195] Run: sudo runc list -f json
	I0111 07:58:08.932499  321804 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0111 07:58:08.945935  321804 pause.go:52] kubelet running: false
	I0111 07:58:08.945996  321804 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0111 07:58:09.106850  321804 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I0111 07:58:09.106946  321804 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0111 07:58:09.179450  321804 cri.go:96] found id: "3873b8c3319057ffe28b5dd096d787bb61bf55ed50eb16c467bff2842ce12aea"
	I0111 07:58:09.179470  321804 cri.go:96] found id: "0dd22536edc65b5899a77d165fd818b4fd7c80d7bb59506a4d00cbcd12bd09f8"
	I0111 07:58:09.179474  321804 cri.go:96] found id: "2a89cf140b40e92be6883546b9d6eaa87b04656fede8886d6f6498fd3fef6b08"
	I0111 07:58:09.179477  321804 cri.go:96] found id: "2887e5b7f844309154ee2ebabd60573461082436207c18a29bc160eef0337f27"
	I0111 07:58:09.179487  321804 cri.go:96] found id: "df31d3d09ca389436cb3a01ded5e81bc1d65c374669f941d29494f6983c8f2ec"
	I0111 07:58:09.179491  321804 cri.go:96] found id: "21d8f6523f41f21ffeda4f2bbe6d7a6d6e3b5777d00e6636a3e24a10e4d289b6"
	I0111 07:58:09.179494  321804 cri.go:96] found id: "0edb54dd4527597f168155b2e84150456758f58bfbdfc095e1eb8fb25625aba1"
	I0111 07:58:09.179497  321804 cri.go:96] found id: "d6b9eea8c7a54fe3616dbfe3782652d9def84a72df838c12acfb555c6b06365f"
	I0111 07:58:09.179499  321804 cri.go:96] found id: "9aea2caaf541351d25ce6ed8626deeaa0c2f10b82ca916b553d95e5d0f95e0b8"
	I0111 07:58:09.179506  321804 cri.go:96] found id: "e7d5e624df5a69fbe9f27d96d8e518459c67009ca71ee543407b20facfc94c35"
	I0111 07:58:09.179508  321804 cri.go:96] found id: "ffd86ac4a50fa6d2ff0de994b260b31f36892f9782d8d5d20f9d83f614c2a8b6"
	I0111 07:58:09.179512  321804 cri.go:96] found id: ""
	I0111 07:58:09.179555  321804 ssh_runner.go:195] Run: sudo runc list -f json
	I0111 07:58:09.194410  321804 out.go:203] 
	W0111 07:58:09.195940  321804 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-11T07:58:09Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-11T07:58:09Z" level=error msg="open /run/runc: no such file or directory"
	
	W0111 07:58:09.195961  321804 out.go:285] * 
	* 
	W0111 07:58:09.197772  321804 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0111 07:58:09.198970  321804 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p old-k8s-version-086314 --alsologtostderr -v=1 failed: exit status 80
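Note: the exit status 80 above traces back to the repeated `sudo runc list -f json` probes in the stderr log: kubelet has been stopped and crictl still reports running kube-system containers, but runc cannot open its default state directory (/run/runc). The probes can be replayed by hand against the node; this is only a sketch using the commands already shown in the log (the final `ls` simply confirms whether the directory runc complains about exists):

	# containers CRI-O still considers present in kube-system (same crictl probe as the log)
	out/minikube-linux-amd64 -p old-k8s-version-086314 ssh "sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	# the probe that fails in the log: runc reads container state from /run/runc by default
	out/minikube-linux-amd64 -p old-k8s-version-086314 ssh "sudo runc list -f json"
	# check whether that state directory actually exists on the node
	out/minikube-linux-amd64 -p old-k8s-version-086314 ssh "sudo ls /run/runc"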
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect old-k8s-version-086314
helpers_test.go:244: (dbg) docker inspect old-k8s-version-086314:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "3dea01a8b9cd82bb7717992a3417860bfe405db14a402ffeb04605a3dc1a3a6e",
	        "Created": "2026-01-11T07:55:55.943439087Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 311002,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2026-01-11T07:57:07.577254184Z",
	            "FinishedAt": "2026-01-11T07:57:06.675212797Z"
	        },
	        "Image": "sha256:9d81ad6f098dcf669510a21e6f98da4d3301079c139c22f3d7bcee050830f45e",
	        "ResolvConfPath": "/var/lib/docker/containers/3dea01a8b9cd82bb7717992a3417860bfe405db14a402ffeb04605a3dc1a3a6e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3dea01a8b9cd82bb7717992a3417860bfe405db14a402ffeb04605a3dc1a3a6e/hostname",
	        "HostsPath": "/var/lib/docker/containers/3dea01a8b9cd82bb7717992a3417860bfe405db14a402ffeb04605a3dc1a3a6e/hosts",
	        "LogPath": "/var/lib/docker/containers/3dea01a8b9cd82bb7717992a3417860bfe405db14a402ffeb04605a3dc1a3a6e/3dea01a8b9cd82bb7717992a3417860bfe405db14a402ffeb04605a3dc1a3a6e-json.log",
	        "Name": "/old-k8s-version-086314",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-086314:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-086314",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "3dea01a8b9cd82bb7717992a3417860bfe405db14a402ffeb04605a3dc1a3a6e",
	                "LowerDir": "/var/lib/docker/overlay2/98f2997c446d54f761fca052ac1e48a6fb561a1c57bfe758c99a4f45d76f0a83-init/diff:/var/lib/docker/overlay2/fbed3e2386e680328722e9ed68250a4aae1dd3b50a64848ee2ebb685fb1cc002/diff",
	                "MergedDir": "/var/lib/docker/overlay2/98f2997c446d54f761fca052ac1e48a6fb561a1c57bfe758c99a4f45d76f0a83/merged",
	                "UpperDir": "/var/lib/docker/overlay2/98f2997c446d54f761fca052ac1e48a6fb561a1c57bfe758c99a4f45d76f0a83/diff",
	                "WorkDir": "/var/lib/docker/overlay2/98f2997c446d54f761fca052ac1e48a6fb561a1c57bfe758c99a4f45d76f0a83/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-086314",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-086314/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-086314",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-086314",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-086314",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "113864cbe9cbc0b20122fd36a7a1f529fa0e1e7d342a2f5c26ee4d3326d9afba",
	            "SandboxKey": "/var/run/docker/netns/113864cbe9cb",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33108"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33109"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33112"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33110"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33111"
	                    }
	                ]
	            },
	            "Networks": {
	                "old-k8s-version-086314": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "9bb753c53c10f85ef7bfc0f02abe3d37237f5f78e229ae0d2842e242dc126077",
	                    "EndpointID": "427ac4491b9181dcfe1fabf2af5b1f95ed1347cb6f95ab09f6c8ab5e4181a3c2",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "2e:07:6b:53:65:40",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-086314",
	                        "3dea01a8b9cd"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
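The inspect dump above records the SSH port mapping for the profile container (22/tcp published on 127.0.0.1:33108). The same value can be read back with the Go template the harness logs elsewhere in this report, re-quoted for manual shell use; a minimal sketch, assuming the container still exists on the CI host:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' old-k8s-version-086314
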
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-086314 -n old-k8s-version-086314
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-086314 -n old-k8s-version-086314: exit status 2 (357.259072ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
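The failing step can be retried by hand with the same binary and flags recorded in the audit trail below, followed by the same status probe used above; a sketch assuming the workspace layout of this job:

	out/minikube-linux-amd64 pause -p old-k8s-version-086314 --alsologtostderr -v=1
	out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-086314 -n old-k8s-version-086314
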
helpers_test.go:253: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-086314 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-086314 logs -n 25: (1.193365644s)
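Beyond the bundled post-mortem logs that follow, the node's runtime view can also be queried directly over the profile's SSH session, mirroring the ssh invocations recorded in the audit table; a sketch, assuming crictl is available inside the node image:

	out/minikube-linux-amd64 ssh -p old-k8s-version-086314 sudo crictl ps -a
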
helpers_test.go:261: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-181803 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ bridge-181803                │ jenkins │ v1.37.0 │ 11 Jan 26 07:56 UTC │ 11 Jan 26 07:56 UTC │
	│ ssh     │ -p bridge-181803 sudo containerd config dump                                                                                                                                                                                                  │ bridge-181803                │ jenkins │ v1.37.0 │ 11 Jan 26 07:56 UTC │ 11 Jan 26 07:56 UTC │
	│ ssh     │ -p bridge-181803 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ bridge-181803                │ jenkins │ v1.37.0 │ 11 Jan 26 07:56 UTC │ 11 Jan 26 07:56 UTC │
	│ ssh     │ -p bridge-181803 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ bridge-181803                │ jenkins │ v1.37.0 │ 11 Jan 26 07:56 UTC │ 11 Jan 26 07:56 UTC │
	│ ssh     │ -p bridge-181803 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ bridge-181803                │ jenkins │ v1.37.0 │ 11 Jan 26 07:56 UTC │ 11 Jan 26 07:56 UTC │
	│ ssh     │ -p bridge-181803 sudo crio config                                                                                                                                                                                                             │ bridge-181803                │ jenkins │ v1.37.0 │ 11 Jan 26 07:56 UTC │ 11 Jan 26 07:56 UTC │
	│ delete  │ -p bridge-181803                                                                                                                                                                                                                              │ bridge-181803                │ jenkins │ v1.37.0 │ 11 Jan 26 07:56 UTC │ 11 Jan 26 07:56 UTC │
	│ delete  │ -p disable-driver-mounts-768607                                                                                                                                                                                                               │ disable-driver-mounts-768607 │ jenkins │ v1.37.0 │ 11 Jan 26 07:56 UTC │ 11 Jan 26 07:56 UTC │
	│ start   │ -p default-k8s-diff-port-585688 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-585688 │ jenkins │ v1.37.0 │ 11 Jan 26 07:56 UTC │ 11 Jan 26 07:57 UTC │
	│ addons  │ enable metrics-server -p no-preload-875594 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-875594            │ jenkins │ v1.37.0 │ 11 Jan 26 07:56 UTC │                     │
	│ stop    │ -p no-preload-875594 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-875594            │ jenkins │ v1.37.0 │ 11 Jan 26 07:56 UTC │ 11 Jan 26 07:57 UTC │
	│ addons  │ enable metrics-server -p embed-certs-285966 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-285966           │ jenkins │ v1.37.0 │ 11 Jan 26 07:57 UTC │                     │
	│ stop    │ -p embed-certs-285966 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-285966           │ jenkins │ v1.37.0 │ 11 Jan 26 07:57 UTC │ 11 Jan 26 07:57 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-086314 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-086314       │ jenkins │ v1.37.0 │ 11 Jan 26 07:57 UTC │ 11 Jan 26 07:57 UTC │
	│ start   │ -p old-k8s-version-086314 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-086314       │ jenkins │ v1.37.0 │ 11 Jan 26 07:57 UTC │ 11 Jan 26 07:57 UTC │
	│ addons  │ enable dashboard -p no-preload-875594 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-875594            │ jenkins │ v1.37.0 │ 11 Jan 26 07:57 UTC │ 11 Jan 26 07:57 UTC │
	│ start   │ -p no-preload-875594 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                       │ no-preload-875594            │ jenkins │ v1.37.0 │ 11 Jan 26 07:57 UTC │ 11 Jan 26 07:58 UTC │
	│ addons  │ enable dashboard -p embed-certs-285966 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-285966           │ jenkins │ v1.37.0 │ 11 Jan 26 07:57 UTC │ 11 Jan 26 07:57 UTC │
	│ start   │ -p embed-certs-285966 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                        │ embed-certs-285966           │ jenkins │ v1.37.0 │ 11 Jan 26 07:57 UTC │ 11 Jan 26 07:58 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-585688 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-585688 │ jenkins │ v1.37.0 │ 11 Jan 26 07:57 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-585688 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-585688 │ jenkins │ v1.37.0 │ 11 Jan 26 07:57 UTC │ 11 Jan 26 07:58 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-585688 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-585688 │ jenkins │ v1.37.0 │ 11 Jan 26 07:58 UTC │ 11 Jan 26 07:58 UTC │
	│ start   │ -p default-k8s-diff-port-585688 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-585688 │ jenkins │ v1.37.0 │ 11 Jan 26 07:58 UTC │                     │
	│ image   │ old-k8s-version-086314 image list --format=json                                                                                                                                                                                               │ old-k8s-version-086314       │ jenkins │ v1.37.0 │ 11 Jan 26 07:58 UTC │ 11 Jan 26 07:58 UTC │
	│ pause   │ -p old-k8s-version-086314 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-086314       │ jenkins │ v1.37.0 │ 11 Jan 26 07:58 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2026/01/11 07:58:03
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0111 07:58:03.750293  321160 out.go:360] Setting OutFile to fd 1 ...
	I0111 07:58:03.750380  321160 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 07:58:03.750385  321160 out.go:374] Setting ErrFile to fd 2...
	I0111 07:58:03.750389  321160 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 07:58:03.750603  321160 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22402-4689/.minikube/bin
	I0111 07:58:03.751045  321160 out.go:368] Setting JSON to false
	I0111 07:58:03.752302  321160 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2432,"bootTime":1768115852,"procs":362,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0111 07:58:03.752356  321160 start.go:143] virtualization: kvm guest
	I0111 07:58:03.754333  321160 out.go:179] * [default-k8s-diff-port-585688] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0111 07:58:03.755670  321160 out.go:179]   - MINIKUBE_LOCATION=22402
	I0111 07:58:03.755682  321160 notify.go:221] Checking for updates...
	I0111 07:58:03.760338  321160 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0111 07:58:03.761494  321160 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22402-4689/kubeconfig
	I0111 07:58:03.762557  321160 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22402-4689/.minikube
	I0111 07:58:03.763473  321160 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0111 07:58:03.764542  321160 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0111 07:58:03.765996  321160 config.go:182] Loaded profile config "default-k8s-diff-port-585688": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 07:58:03.766475  321160 driver.go:422] Setting default libvirt URI to qemu:///system
	I0111 07:58:03.790517  321160 docker.go:124] docker version: linux-29.1.4:Docker Engine - Community
	I0111 07:58:03.790609  321160 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0111 07:58:03.850389  321160 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:74 SystemTime:2026-01-11 07:58:03.839194751 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0111 07:58:03.850495  321160 docker.go:319] overlay module found
	I0111 07:58:03.852223  321160 out.go:179] * Using the docker driver based on existing profile
	I0111 07:58:03.853346  321160 start.go:309] selected driver: docker
	I0111 07:58:03.853363  321160 start.go:928] validating driver "docker" against &{Name:default-k8s-diff-port-585688 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-585688 Namespace:default APISe
rverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] M
ountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0111 07:58:03.853470  321160 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0111 07:58:03.854110  321160 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0111 07:58:03.909243  321160 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:74 SystemTime:2026-01-11 07:58:03.898847213 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0111 07:58:03.909498  321160 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0111 07:58:03.909520  321160 cni.go:84] Creating CNI manager for ""
	I0111 07:58:03.909574  321160 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0111 07:58:03.909605  321160 start.go:353] cluster config:
	{Name:default-k8s-diff-port-585688 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-585688 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0111 07:58:03.912090  321160 out.go:179] * Starting "default-k8s-diff-port-585688" primary control-plane node in "default-k8s-diff-port-585688" cluster
	I0111 07:58:03.913198  321160 cache.go:134] Beginning downloading kic base image for docker with crio
	I0111 07:58:03.914457  321160 out.go:179] * Pulling base image v0.0.48-1768032998-22402 ...
	I0111 07:58:03.915521  321160 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0111 07:58:03.915557  321160 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22402-4689/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4
	I0111 07:58:03.915566  321160 cache.go:65] Caching tarball of preloaded images
	I0111 07:58:03.915650  321160 preload.go:251] Found /home/jenkins/minikube-integration/22402-4689/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0111 07:58:03.915647  321160 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 in local docker daemon
	I0111 07:58:03.915662  321160 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I0111 07:58:03.915923  321160 profile.go:143] Saving config to /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/default-k8s-diff-port-585688/config.json ...
	I0111 07:58:03.937090  321160 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 in local docker daemon, skipping pull
	I0111 07:58:03.937113  321160 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 exists in daemon, skipping load
	I0111 07:58:03.937135  321160 cache.go:243] Successfully downloaded all kic artifacts
	I0111 07:58:03.937167  321160 start.go:360] acquireMachinesLock for default-k8s-diff-port-585688: {Name:mk2e6a9de762d1bbe8860b89830bc1ce9534f022 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0111 07:58:03.937235  321160 start.go:364] duration metric: took 38.527µs to acquireMachinesLock for "default-k8s-diff-port-585688"
	I0111 07:58:03.937257  321160 start.go:96] Skipping create...Using existing machine configuration
	I0111 07:58:03.937263  321160 fix.go:54] fixHost starting: 
	I0111 07:58:03.937492  321160 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-585688 --format={{.State.Status}}
	I0111 07:58:03.956257  321160 fix.go:112] recreateIfNeeded on default-k8s-diff-port-585688: state=Stopped err=<nil>
	W0111 07:58:03.956296  321160 fix.go:138] unexpected machine state, will restart: <nil>
	I0111 07:58:03.958165  321160 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-585688" ...
	I0111 07:58:03.958234  321160 cli_runner.go:164] Run: docker start default-k8s-diff-port-585688
	I0111 07:58:04.218023  321160 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-585688 --format={{.State.Status}}
	I0111 07:58:04.237128  321160 kic.go:430] container "default-k8s-diff-port-585688" state is running.
	I0111 07:58:04.237486  321160 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-585688
	I0111 07:58:04.256566  321160 profile.go:143] Saving config to /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/default-k8s-diff-port-585688/config.json ...
	I0111 07:58:04.256863  321160 machine.go:94] provisionDockerMachine start ...
	I0111 07:58:04.256948  321160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-585688
	I0111 07:58:04.276200  321160 main.go:144] libmachine: Using SSH client type: native
	I0111 07:58:04.276485  321160 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x9087e0] 0x90b480 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I0111 07:58:04.276505  321160 main.go:144] libmachine: About to run SSH command:
	hostname
	I0111 07:58:04.277203  321160 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:43966->127.0.0.1:33123: read: connection reset by peer
	I0111 07:58:07.423204  321160 main.go:144] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-585688
	
	I0111 07:58:07.423233  321160 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-585688"
	I0111 07:58:07.423302  321160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-585688
	I0111 07:58:07.443350  321160 main.go:144] libmachine: Using SSH client type: native
	I0111 07:58:07.443550  321160 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x9087e0] 0x90b480 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I0111 07:58:07.443562  321160 main.go:144] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-585688 && echo "default-k8s-diff-port-585688" | sudo tee /etc/hostname
	I0111 07:58:07.600640  321160 main.go:144] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-585688
	
	I0111 07:58:07.600736  321160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-585688
	I0111 07:58:07.621646  321160 main.go:144] libmachine: Using SSH client type: native
	I0111 07:58:07.621975  321160 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x9087e0] 0x90b480 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I0111 07:58:07.622014  321160 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-585688' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-585688/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-585688' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0111 07:58:07.770516  321160 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I0111 07:58:07.770544  321160 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22402-4689/.minikube CaCertPath:/home/jenkins/minikube-integration/22402-4689/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22402-4689/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22402-4689/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22402-4689/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22402-4689/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22402-4689/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22402-4689/.minikube}
	I0111 07:58:07.770594  321160 ubuntu.go:190] setting up certificates
	I0111 07:58:07.770613  321160 provision.go:84] configureAuth start
	I0111 07:58:07.770680  321160 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-585688
	I0111 07:58:07.788321  321160 provision.go:143] copyHostCerts
	I0111 07:58:07.788406  321160 exec_runner.go:144] found /home/jenkins/minikube-integration/22402-4689/.minikube/ca.pem, removing ...
	I0111 07:58:07.788419  321160 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22402-4689/.minikube/ca.pem
	I0111 07:58:07.788492  321160 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22402-4689/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22402-4689/.minikube/ca.pem (1082 bytes)
	I0111 07:58:07.788602  321160 exec_runner.go:144] found /home/jenkins/minikube-integration/22402-4689/.minikube/cert.pem, removing ...
	I0111 07:58:07.788611  321160 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22402-4689/.minikube/cert.pem
	I0111 07:58:07.788643  321160 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22402-4689/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22402-4689/.minikube/cert.pem (1123 bytes)
	I0111 07:58:07.788720  321160 exec_runner.go:144] found /home/jenkins/minikube-integration/22402-4689/.minikube/key.pem, removing ...
	I0111 07:58:07.788727  321160 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22402-4689/.minikube/key.pem
	I0111 07:58:07.788752  321160 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22402-4689/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22402-4689/.minikube/key.pem (1675 bytes)
	I0111 07:58:07.788836  321160 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22402-4689/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22402-4689/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22402-4689/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-585688 san=[127.0.0.1 192.168.94.2 default-k8s-diff-port-585688 localhost minikube]
	I0111 07:58:07.847825  321160 provision.go:177] copyRemoteCerts
	I0111 07:58:07.847881  321160 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0111 07:58:07.847920  321160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-585688
	I0111 07:58:07.866299  321160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22402-4689/.minikube/machines/default-k8s-diff-port-585688/id_rsa Username:docker}
	I0111 07:58:07.968665  321160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-4689/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0111 07:58:07.986297  321160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-4689/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0111 07:58:08.003675  321160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-4689/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0111 07:58:08.020217  321160 provision.go:87] duration metric: took 249.589615ms to configureAuth
	I0111 07:58:08.020247  321160 ubuntu.go:206] setting minikube options for container-runtime
	I0111 07:58:08.020421  321160 config.go:182] Loaded profile config "default-k8s-diff-port-585688": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 07:58:08.020520  321160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-585688
	I0111 07:58:08.038641  321160 main.go:144] libmachine: Using SSH client type: native
	I0111 07:58:08.038886  321160 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x9087e0] 0x90b480 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I0111 07:58:08.038906  321160 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0111 07:58:08.392034  321160 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0111 07:58:08.392061  321160 machine.go:97] duration metric: took 4.135180228s to provisionDockerMachine
	I0111 07:58:08.392076  321160 start.go:293] postStartSetup for "default-k8s-diff-port-585688" (driver="docker")
	I0111 07:58:08.392091  321160 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0111 07:58:08.392193  321160 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0111 07:58:08.392248  321160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-585688
	I0111 07:58:08.413393  321160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22402-4689/.minikube/machines/default-k8s-diff-port-585688/id_rsa Username:docker}
	I0111 07:58:08.516961  321160 ssh_runner.go:195] Run: cat /etc/os-release
	I0111 07:58:08.520383  321160 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0111 07:58:08.520406  321160 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I0111 07:58:08.520417  321160 filesync.go:126] Scanning /home/jenkins/minikube-integration/22402-4689/.minikube/addons for local assets ...
	I0111 07:58:08.520476  321160 filesync.go:126] Scanning /home/jenkins/minikube-integration/22402-4689/.minikube/files for local assets ...
	I0111 07:58:08.520571  321160 filesync.go:149] local asset: /home/jenkins/minikube-integration/22402-4689/.minikube/files/etc/ssl/certs/82382.pem -> 82382.pem in /etc/ssl/certs
	I0111 07:58:08.520693  321160 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0111 07:58:08.528116  321160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-4689/.minikube/files/etc/ssl/certs/82382.pem --> /etc/ssl/certs/82382.pem (1708 bytes)
	I0111 07:58:08.545008  321160 start.go:296] duration metric: took 152.917379ms for postStartSetup
	I0111 07:58:08.545089  321160 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0111 07:58:08.545135  321160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-585688
	I0111 07:58:08.565305  321160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22402-4689/.minikube/machines/default-k8s-diff-port-585688/id_rsa Username:docker}
	I0111 07:58:08.663695  321160 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0111 07:58:08.668649  321160 fix.go:56] duration metric: took 4.731380965s for fixHost
	I0111 07:58:08.668678  321160 start.go:83] releasing machines lock for "default-k8s-diff-port-585688", held for 4.731430414s
	I0111 07:58:08.668736  321160 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-585688
	I0111 07:58:08.686470  321160 ssh_runner.go:195] Run: cat /version.json
	I0111 07:58:08.686520  321160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-585688
	I0111 07:58:08.686548  321160 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0111 07:58:08.686604  321160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-585688
	I0111 07:58:08.705659  321160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22402-4689/.minikube/machines/default-k8s-diff-port-585688/id_rsa Username:docker}
	I0111 07:58:08.705850  321160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22402-4689/.minikube/machines/default-k8s-diff-port-585688/id_rsa Username:docker}
	
	
	==> CRI-O <==
	Jan 11 07:57:38 old-k8s-version-086314 crio[574]: time="2026-01-11T07:57:38.454308414Z" level=info msg="Created container ffd86ac4a50fa6d2ff0de994b260b31f36892f9782d8d5d20f9d83f614c2a8b6: kubernetes-dashboard/kubernetes-dashboard-8694d4445c-5jfjr/kubernetes-dashboard" id=19c5da80-5439-46d4-aeef-f9a13f394836 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 11 07:57:38 old-k8s-version-086314 crio[574]: time="2026-01-11T07:57:38.455369119Z" level=info msg="Starting container: ffd86ac4a50fa6d2ff0de994b260b31f36892f9782d8d5d20f9d83f614c2a8b6" id=fa76943a-adac-4cb8-8778-3a11ae399640 name=/runtime.v1.RuntimeService/StartContainer
	Jan 11 07:57:38 old-k8s-version-086314 crio[574]: time="2026-01-11T07:57:38.458218352Z" level=info msg="Started container" PID=1752 containerID=ffd86ac4a50fa6d2ff0de994b260b31f36892f9782d8d5d20f9d83f614c2a8b6 description=kubernetes-dashboard/kubernetes-dashboard-8694d4445c-5jfjr/kubernetes-dashboard id=fa76943a-adac-4cb8-8778-3a11ae399640 name=/runtime.v1.RuntimeService/StartContainer sandboxID=edf3c8cfc9e3ab3b6a240161e083aab9af76e9e28e9febf11296c37f37cc01be
	Jan 11 07:57:49 old-k8s-version-086314 crio[574]: time="2026-01-11T07:57:49.132883512Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=a66e3b9f-0c54-4cca-b9a5-c401ce433773 name=/runtime.v1.ImageService/ImageStatus
	Jan 11 07:57:49 old-k8s-version-086314 crio[574]: time="2026-01-11T07:57:49.13386867Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=06a317df-b821-4291-bacd-ddfa3ce88bac name=/runtime.v1.ImageService/ImageStatus
	Jan 11 07:57:49 old-k8s-version-086314 crio[574]: time="2026-01-11T07:57:49.134930212Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=5376f286-d2e1-445d-9d75-eb35e11140a4 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 11 07:57:49 old-k8s-version-086314 crio[574]: time="2026-01-11T07:57:49.135089769Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 11 07:57:49 old-k8s-version-086314 crio[574]: time="2026-01-11T07:57:49.139873555Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 11 07:57:49 old-k8s-version-086314 crio[574]: time="2026-01-11T07:57:49.140083005Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/99a6de00a7d032804068f0037f783b59ff11b2b85c0d3ca4a86cdfcc6563fd25/merged/etc/passwd: no such file or directory"
	Jan 11 07:57:49 old-k8s-version-086314 crio[574]: time="2026-01-11T07:57:49.140116904Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/99a6de00a7d032804068f0037f783b59ff11b2b85c0d3ca4a86cdfcc6563fd25/merged/etc/group: no such file or directory"
	Jan 11 07:57:49 old-k8s-version-086314 crio[574]: time="2026-01-11T07:57:49.140410516Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 11 07:57:49 old-k8s-version-086314 crio[574]: time="2026-01-11T07:57:49.166543262Z" level=info msg="Created container 3873b8c3319057ffe28b5dd096d787bb61bf55ed50eb16c467bff2842ce12aea: kube-system/storage-provisioner/storage-provisioner" id=5376f286-d2e1-445d-9d75-eb35e11140a4 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 11 07:57:49 old-k8s-version-086314 crio[574]: time="2026-01-11T07:57:49.167116463Z" level=info msg="Starting container: 3873b8c3319057ffe28b5dd096d787bb61bf55ed50eb16c467bff2842ce12aea" id=be1edc3b-4a3b-40a5-93e0-a46eab0c775d name=/runtime.v1.RuntimeService/StartContainer
	Jan 11 07:57:49 old-k8s-version-086314 crio[574]: time="2026-01-11T07:57:49.168708716Z" level=info msg="Started container" PID=1774 containerID=3873b8c3319057ffe28b5dd096d787bb61bf55ed50eb16c467bff2842ce12aea description=kube-system/storage-provisioner/storage-provisioner id=be1edc3b-4a3b-40a5-93e0-a46eab0c775d name=/runtime.v1.RuntimeService/StartContainer sandboxID=8ddc6863a51a676e59e2535422ad0591d88d6c5a88adbe24fcdf612dfb7f2fb6
	Jan 11 07:57:57 old-k8s-version-086314 crio[574]: time="2026-01-11T07:57:57.002757534Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=47a12997-1eed-48c2-ab98-f61eb4897e38 name=/runtime.v1.ImageService/ImageStatus
	Jan 11 07:57:57 old-k8s-version-086314 crio[574]: time="2026-01-11T07:57:57.003886334Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=e7b15f6d-1709-4ed3-8470-07ca9da957ac name=/runtime.v1.ImageService/ImageStatus
	Jan 11 07:57:57 old-k8s-version-086314 crio[574]: time="2026-01-11T07:57:57.005087621Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-974nl/dashboard-metrics-scraper" id=b4f2de02-cc9f-4fcc-8fd5-18571e05619f name=/runtime.v1.RuntimeService/CreateContainer
	Jan 11 07:57:57 old-k8s-version-086314 crio[574]: time="2026-01-11T07:57:57.005236529Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 11 07:57:57 old-k8s-version-086314 crio[574]: time="2026-01-11T07:57:57.010574353Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 11 07:57:57 old-k8s-version-086314 crio[574]: time="2026-01-11T07:57:57.011245812Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 11 07:57:57 old-k8s-version-086314 crio[574]: time="2026-01-11T07:57:57.043862301Z" level=info msg="Created container e7d5e624df5a69fbe9f27d96d8e518459c67009ca71ee543407b20facfc94c35: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-974nl/dashboard-metrics-scraper" id=b4f2de02-cc9f-4fcc-8fd5-18571e05619f name=/runtime.v1.RuntimeService/CreateContainer
	Jan 11 07:57:57 old-k8s-version-086314 crio[574]: time="2026-01-11T07:57:57.044460754Z" level=info msg="Starting container: e7d5e624df5a69fbe9f27d96d8e518459c67009ca71ee543407b20facfc94c35" id=c6639d7d-c770-4f6f-aac6-f55657351ebf name=/runtime.v1.RuntimeService/StartContainer
	Jan 11 07:57:57 old-k8s-version-086314 crio[574]: time="2026-01-11T07:57:57.046120885Z" level=info msg="Started container" PID=1813 containerID=e7d5e624df5a69fbe9f27d96d8e518459c67009ca71ee543407b20facfc94c35 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-974nl/dashboard-metrics-scraper id=c6639d7d-c770-4f6f-aac6-f55657351ebf name=/runtime.v1.RuntimeService/StartContainer sandboxID=0f39934227c8f036237197340e447d04cffcef8ca1b56196fad990df4f9b7fbc
	Jan 11 07:57:57 old-k8s-version-086314 crio[574]: time="2026-01-11T07:57:57.155253201Z" level=info msg="Removing container: d6dc0e457e50faf8b54d92edc77e33b862aa922d79b65842a2455710ff1790e8" id=3d7dbcd1-0cb3-43a1-bfe4-631983424a72 name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 11 07:57:57 old-k8s-version-086314 crio[574]: time="2026-01-11T07:57:57.165103403Z" level=info msg="Removed container d6dc0e457e50faf8b54d92edc77e33b862aa922d79b65842a2455710ff1790e8: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-974nl/dashboard-metrics-scraper" id=3d7dbcd1-0cb3-43a1-bfe4-631983424a72 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	e7d5e624df5a6       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           13 seconds ago      Exited              dashboard-metrics-scraper   2                   0f39934227c8f       dashboard-metrics-scraper-5f989dc9cf-974nl       kubernetes-dashboard
	3873b8c331905       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           21 seconds ago      Running             storage-provisioner         1                   8ddc6863a51a6       storage-provisioner                              kube-system
	ffd86ac4a50fa       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   31 seconds ago      Running             kubernetes-dashboard        0                   edf3c8cfc9e3a       kubernetes-dashboard-8694d4445c-5jfjr            kubernetes-dashboard
	1a68a697fa686       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           51 seconds ago      Running             busybox                     1                   897ecbf6e6be4       busybox                                          default
	0dd22536edc65       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                           51 seconds ago      Running             coredns                     0                   c2cc65fa850be       coredns-5dd5756b68-797jn                         kube-system
	2a89cf140b40e       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                           51 seconds ago      Running             kube-proxy                  0                   c4af81c0c2faf       kube-proxy-dl9xq                                 kube-system
	2887e5b7f8443       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           51 seconds ago      Exited              storage-provisioner         0                   8ddc6863a51a6       storage-provisioner                              kube-system
	df31d3d09ca38       4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251                                           51 seconds ago      Running             kindnet-cni                 0                   7b05ee17dd9b3       kindnet-9sfd9                                    kube-system
	21d8f6523f41f       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                           55 seconds ago      Running             kube-scheduler              0                   b61a9599fdd39       kube-scheduler-old-k8s-version-086314            kube-system
	0edb54dd45275       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                           55 seconds ago      Running             etcd                        0                   a17a9da7f99c8       etcd-old-k8s-version-086314                      kube-system
	d6b9eea8c7a54       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                           55 seconds ago      Running             kube-controller-manager     0                   c50924beadbe0       kube-controller-manager-old-k8s-version-086314   kube-system
	9aea2caaf5413       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                           55 seconds ago      Running             kube-apiserver              0                   21d898e699f54       kube-apiserver-old-k8s-version-086314            kube-system
	
	
	==> coredns [0dd22536edc65b5899a77d165fd818b4fd7c80d7bb59506a4d00cbcd12bd09f8] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 25cf5af2951e282c4b0e961a02fb5d3e57c974501832fee92eec17b5135b9ec9d9e87d2ac94e6d117a5ed3dd54e8800aa7b4479706eb54497145ccdb80397d1b
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:32973 - 63485 "HINFO IN 971775512634353947.4819203527870664801. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.041152422s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> describe nodes <==
	Name:               old-k8s-version-086314
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-086314
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4473aa4ffaa416872fe849e19c0ce3dabca02c04
	                    minikube.k8s.io/name=old-k8s-version-086314
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2026_01_11T07_56_11_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 11 Jan 2026 07:56:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-086314
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 11 Jan 2026 07:57:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 11 Jan 2026 07:57:48 +0000   Sun, 11 Jan 2026 07:56:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 11 Jan 2026 07:57:48 +0000   Sun, 11 Jan 2026 07:56:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 11 Jan 2026 07:57:48 +0000   Sun, 11 Jan 2026 07:56:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 11 Jan 2026 07:57:48 +0000   Sun, 11 Jan 2026 07:56:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    old-k8s-version-086314
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 19a608e996fda0d6a8d0aefd69620b69
	  System UUID:                d90f486e-fdcb-4fb9-aaa3-f827ad3b5d21
	  Boot ID:                    bf5ef4fd-1e5e-4242-b522-48b308ed11f1
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.35.0
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 coredns-5dd5756b68-797jn                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     107s
	  kube-system                 etcd-old-k8s-version-086314                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m
	  kube-system                 kindnet-9sfd9                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      107s
	  kube-system                 kube-apiserver-old-k8s-version-086314             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 kube-controller-manager-old-k8s-version-086314    200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 kube-proxy-dl9xq                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system                 kube-scheduler-old-k8s-version-086314             100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-974nl        0 (0%)        0 (0%)      0 (0%)           0 (0%)         41s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-5jfjr             0 (0%)        0 (0%)      0 (0%)           0 (0%)         41s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 106s                 kube-proxy       
	  Normal  Starting                 51s                  kube-proxy       
	  Normal  Starting                 2m5s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m5s (x8 over 2m5s)  kubelet          Node old-k8s-version-086314 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m5s (x8 over 2m5s)  kubelet          Node old-k8s-version-086314 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m5s (x8 over 2m5s)  kubelet          Node old-k8s-version-086314 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    2m                   kubelet          Node old-k8s-version-086314 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  2m                   kubelet          Node old-k8s-version-086314 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     2m                   kubelet          Node old-k8s-version-086314 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m                   kubelet          Starting kubelet.
	  Normal  RegisteredNode           107s                 node-controller  Node old-k8s-version-086314 event: Registered Node old-k8s-version-086314 in Controller
	  Normal  NodeReady                94s                  kubelet          Node old-k8s-version-086314 status is now: NodeReady
	  Normal  Starting                 57s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  56s (x9 over 57s)    kubelet          Node old-k8s-version-086314 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    56s (x8 over 57s)    kubelet          Node old-k8s-version-086314 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     56s (x7 over 57s)    kubelet          Node old-k8s-version-086314 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           41s                  node-controller  Node old-k8s-version-086314 event: Registered Node old-k8s-version-086314 in Controller
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff d2 27 60 f0 e8 33 08 06
	[  +7.744805] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ae ab 39 92 a9 51 08 06
	[  +3.804668] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff f6 cc 68 e4 77 08 08 06
	[  +0.000399] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff d2 27 60 f0 e8 33 08 06
	[ +13.685163] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff ce 63 58 3f ae 06 08 06
	[  +0.000359] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ae ab 39 92 a9 51 08 06
	[ +15.294069] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 62 fd 79 5e 2e 8a 08 06
	[  +0.000335] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 92 a7 42 30 c5 33 08 06
	[  +0.001094] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 76 49 4e 78 31 d8 08 06
	[Jan11 07:56] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 1e 72 0a 69 e2 8c 08 06
	[  +0.129588] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 46 31 8d 74 d3 50 08 06
	[ +23.675508] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 72 b4 a7 bf 59 ba 08 06
	[  +0.000409] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 46 31 8d 74 d3 50 08 06
	
	
	==> etcd [0edb54dd4527597f168155b2e84150456758f58bfbdfc095e1eb8fb25625aba1] <==
	{"level":"info","ts":"2026-01-11T07:57:14.619691Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2026-01-11T07:57:14.619702Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2026-01-11T07:57:14.620552Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2026-01-11T07:57:14.620869Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"f23060b075c4c089","initial-advertise-peer-urls":["https://192.168.103.2:2380"],"listen-peer-urls":["https://192.168.103.2:2380"],"advertise-client-urls":["https://192.168.103.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.103.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2026-01-11T07:57:14.620976Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2026-01-11T07:57:14.621219Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2026-01-11T07:57:14.621246Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2026-01-11T07:57:14.62532Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 switched to configuration voters=(17451554867067011209)"}
	{"level":"info","ts":"2026-01-11T07:57:14.626554Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","added-peer-id":"f23060b075c4c089","added-peer-peer-urls":["https://192.168.103.2:2380"]}
	{"level":"info","ts":"2026-01-11T07:57:14.626866Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","cluster-version":"3.5"}
	{"level":"info","ts":"2026-01-11T07:57:14.626922Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2026-01-11T07:57:16.102435Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 is starting a new election at term 2"}
	{"level":"info","ts":"2026-01-11T07:57:16.102487Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became pre-candidate at term 2"}
	{"level":"info","ts":"2026-01-11T07:57:16.10252Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgPreVoteResp from f23060b075c4c089 at term 2"}
	{"level":"info","ts":"2026-01-11T07:57:16.102533Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became candidate at term 3"}
	{"level":"info","ts":"2026-01-11T07:57:16.102538Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgVoteResp from f23060b075c4c089 at term 3"}
	{"level":"info","ts":"2026-01-11T07:57:16.102546Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became leader at term 3"}
	{"level":"info","ts":"2026-01-11T07:57:16.102553Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f23060b075c4c089 elected leader f23060b075c4c089 at term 3"}
	{"level":"info","ts":"2026-01-11T07:57:16.103507Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"f23060b075c4c089","local-member-attributes":"{Name:old-k8s-version-086314 ClientURLs:[https://192.168.103.2:2379]}","request-path":"/0/members/f23060b075c4c089/attributes","cluster-id":"3336683c081d149d","publish-timeout":"7s"}
	{"level":"info","ts":"2026-01-11T07:57:16.103568Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-11T07:57:16.103591Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-11T07:57:16.103893Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2026-01-11T07:57:16.103981Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2026-01-11T07:57:16.104832Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2026-01-11T07:57:16.105045Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.103.2:2379"}
	
	
	==> kernel <==
	 07:58:10 up 40 min,  0 user,  load average: 3.49, 3.46, 2.50
	Linux old-k8s-version-086314 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [df31d3d09ca389436cb3a01ded5e81bc1d65c374669f941d29494f6983c8f2ec] <==
	I0111 07:57:18.508287       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0111 07:57:18.508554       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I0111 07:57:18.508750       1 main.go:148] setting mtu 1500 for CNI 
	I0111 07:57:18.508775       1 main.go:178] kindnetd IP family: "ipv4"
	I0111 07:57:18.508785       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2026-01-11T07:57:18Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0111 07:57:18.808189       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0111 07:57:18.808265       1 controller.go:381] "Waiting for informer caches to sync"
	I0111 07:57:18.808280       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0111 07:57:18.808430       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I0111 07:57:19.108362       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0111 07:57:19.108396       1 metrics.go:72] Registering metrics
	I0111 07:57:19.108587       1 controller.go:711] "Syncing nftables rules"
	I0111 07:57:28.808040       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0111 07:57:28.808231       1 main.go:301] handling current node
	I0111 07:57:38.808273       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0111 07:57:38.808319       1 main.go:301] handling current node
	I0111 07:57:48.809058       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0111 07:57:48.809151       1 main.go:301] handling current node
	I0111 07:57:58.809012       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0111 07:57:58.809050       1 main.go:301] handling current node
	I0111 07:58:08.813868       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0111 07:58:08.813902       1 main.go:301] handling current node
	
	
	==> kube-apiserver [9aea2caaf541351d25ce6ed8626deeaa0c2f10b82ca916b553d95e5d0f95e0b8] <==
	I0111 07:57:17.021649       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0111 07:57:17.085890       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0111 07:57:17.116857       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0111 07:57:17.116882       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0111 07:57:17.116904       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0111 07:57:17.116925       1 aggregator.go:166] initial CRD sync complete...
	I0111 07:57:17.116932       1 autoregister_controller.go:141] Starting autoregister controller
	I0111 07:57:17.116937       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0111 07:57:17.116942       1 cache.go:39] Caches are synced for autoregister controller
	I0111 07:57:17.117202       1 shared_informer.go:318] Caches are synced for configmaps
	I0111 07:57:17.117235       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0111 07:57:17.117307       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0111 07:57:17.117353       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0111 07:57:17.157502       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0111 07:57:17.906636       1 controller.go:624] quota admission added evaluator for: namespaces
	I0111 07:57:17.936927       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0111 07:57:17.953316       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0111 07:57:17.960589       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0111 07:57:17.967045       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0111 07:57:17.999908       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.100.80.176"}
	I0111 07:57:18.013108       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.107.217.137"}
	I0111 07:57:18.023257       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0111 07:57:29.905435       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I0111 07:57:29.907839       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0111 07:57:29.915252       1 controller.go:624] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [d6b9eea8c7a54fe3616dbfe3782652d9def84a72df838c12acfb555c6b06365f] <==
	I0111 07:57:29.964436       1 shared_informer.go:318] Caches are synced for certificate-csrapproving
	I0111 07:57:29.981822       1 shared_informer.go:318] Caches are synced for disruption
	I0111 07:57:29.985342       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-serving
	I0111 07:57:29.990065       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="30.556551ms"
	I0111 07:57:29.990214       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0111 07:57:29.990235       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-client
	I0111 07:57:29.990781       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="451.309µs"
	I0111 07:57:29.990917       1 shared_informer.go:318] Caches are synced for stateful set
	I0111 07:57:30.003817       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="40.538504ms"
	I0111 07:57:30.004171       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="148.365µs"
	I0111 07:57:30.024281       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="112.095µs"
	I0111 07:57:30.061727       1 shared_informer.go:318] Caches are synced for resource quota
	I0111 07:57:30.089788       1 shared_informer.go:318] Caches are synced for resource quota
	I0111 07:57:30.417407       1 shared_informer.go:318] Caches are synced for garbage collector
	I0111 07:57:30.437584       1 shared_informer.go:318] Caches are synced for garbage collector
	I0111 07:57:30.437627       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0111 07:57:35.103734       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="92.479µs"
	I0111 07:57:36.109990       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="238.375µs"
	I0111 07:57:37.117501       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="117.407µs"
	I0111 07:57:39.128181       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="7.43427ms"
	I0111 07:57:39.128746       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="106.32µs"
	I0111 07:57:53.219753       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="9.497848ms"
	I0111 07:57:53.219926       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="127.334µs"
	I0111 07:57:57.165967       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="68.386µs"
	I0111 07:58:01.754086       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="97.903µs"
	
	
	==> kube-proxy [2a89cf140b40e92be6883546b9d6eaa87b04656fede8886d6f6498fd3fef6b08] <==
	I0111 07:57:18.394292       1 server_others.go:69] "Using iptables proxy"
	I0111 07:57:18.403323       1 node.go:141] Successfully retrieved node IP: 192.168.103.2
	I0111 07:57:18.421855       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0111 07:57:18.424130       1 server_others.go:152] "Using iptables Proxier"
	I0111 07:57:18.424167       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0111 07:57:18.424173       1 server_others.go:438] "Defaulting to no-op detect-local"
	I0111 07:57:18.424194       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0111 07:57:18.424476       1 server.go:846] "Version info" version="v1.28.0"
	I0111 07:57:18.424495       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0111 07:57:18.426287       1 config.go:97] "Starting endpoint slice config controller"
	I0111 07:57:18.426285       1 config.go:188] "Starting service config controller"
	I0111 07:57:18.426341       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0111 07:57:18.427013       1 config.go:315] "Starting node config controller"
	I0111 07:57:18.427031       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0111 07:57:18.427050       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0111 07:57:18.527588       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0111 07:57:18.527639       1 shared_informer.go:318] Caches are synced for service config
	I0111 07:57:18.527642       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [21d8f6523f41f21ffeda4f2bbe6d7a6d6e3b5777d00e6636a3e24a10e4d289b6] <==
	I0111 07:57:15.065495       1 serving.go:348] Generated self-signed cert in-memory
	W0111 07:57:17.076411       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0111 07:57:17.076553       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0111 07:57:17.076577       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0111 07:57:17.076606       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0111 07:57:17.093471       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I0111 07:57:17.093496       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0111 07:57:17.094673       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0111 07:57:17.094705       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0111 07:57:17.095499       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0111 07:57:17.095587       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0111 07:57:17.194917       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jan 11 07:57:31 old-k8s-version-086314 kubelet[739]: E0111 07:57:31.065677     739 projected.go:198] Error preparing data for projected volume kube-api-access-nwr2w for pod kubernetes-dashboard/kubernetes-dashboard-8694d4445c-5jfjr: failed to sync configmap cache: timed out waiting for the condition
	Jan 11 07:57:31 old-k8s-version-086314 kubelet[739]: E0111 07:57:31.065757     739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/eda0c88c-5a79-4493-85c4-7bc8e8cbd7e0-kube-api-access-nwr2w podName:eda0c88c-5a79-4493-85c4-7bc8e8cbd7e0 nodeName:}" failed. No retries permitted until 2026-01-11 07:57:31.565735623 +0000 UTC m=+17.656765643 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-nwr2w" (UniqueName: "kubernetes.io/projected/eda0c88c-5a79-4493-85c4-7bc8e8cbd7e0-kube-api-access-nwr2w") pod "kubernetes-dashboard-8694d4445c-5jfjr" (UID: "eda0c88c-5a79-4493-85c4-7bc8e8cbd7e0") : failed to sync configmap cache: timed out waiting for the condition
	Jan 11 07:57:31 old-k8s-version-086314 kubelet[739]: E0111 07:57:31.066731     739 projected.go:292] Couldn't get configMap kubernetes-dashboard/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Jan 11 07:57:31 old-k8s-version-086314 kubelet[739]: E0111 07:57:31.066767     739 projected.go:198] Error preparing data for projected volume kube-api-access-nlgmf for pod kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-974nl: failed to sync configmap cache: timed out waiting for the condition
	Jan 11 07:57:31 old-k8s-version-086314 kubelet[739]: E0111 07:57:31.066875     739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/88b37ac7-cfb3-404c-9878-76e1edd263ef-kube-api-access-nlgmf podName:88b37ac7-cfb3-404c-9878-76e1edd263ef nodeName:}" failed. No retries permitted until 2026-01-11 07:57:31.566849011 +0000 UTC m=+17.657879042 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-nlgmf" (UniqueName: "kubernetes.io/projected/88b37ac7-cfb3-404c-9878-76e1edd263ef-kube-api-access-nlgmf") pod "dashboard-metrics-scraper-5f989dc9cf-974nl" (UID: "88b37ac7-cfb3-404c-9878-76e1edd263ef") : failed to sync configmap cache: timed out waiting for the condition
	Jan 11 07:57:35 old-k8s-version-086314 kubelet[739]: I0111 07:57:35.086884     739 scope.go:117] "RemoveContainer" containerID="07b5ef40bef9366f55d3e9dcb2c55cab5cec15b5fe6b80eb538fdff9aba5a6c1"
	Jan 11 07:57:36 old-k8s-version-086314 kubelet[739]: I0111 07:57:36.091162     739 scope.go:117] "RemoveContainer" containerID="07b5ef40bef9366f55d3e9dcb2c55cab5cec15b5fe6b80eb538fdff9aba5a6c1"
	Jan 11 07:57:36 old-k8s-version-086314 kubelet[739]: I0111 07:57:36.091487     739 scope.go:117] "RemoveContainer" containerID="d6dc0e457e50faf8b54d92edc77e33b862aa922d79b65842a2455710ff1790e8"
	Jan 11 07:57:36 old-k8s-version-086314 kubelet[739]: E0111 07:57:36.091885     739 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-974nl_kubernetes-dashboard(88b37ac7-cfb3-404c-9878-76e1edd263ef)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-974nl" podUID="88b37ac7-cfb3-404c-9878-76e1edd263ef"
	Jan 11 07:57:37 old-k8s-version-086314 kubelet[739]: I0111 07:57:37.099363     739 scope.go:117] "RemoveContainer" containerID="d6dc0e457e50faf8b54d92edc77e33b862aa922d79b65842a2455710ff1790e8"
	Jan 11 07:57:37 old-k8s-version-086314 kubelet[739]: E0111 07:57:37.099759     739 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-974nl_kubernetes-dashboard(88b37ac7-cfb3-404c-9878-76e1edd263ef)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-974nl" podUID="88b37ac7-cfb3-404c-9878-76e1edd263ef"
	Jan 11 07:57:39 old-k8s-version-086314 kubelet[739]: I0111 07:57:39.120701     739 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-5jfjr" podStartSLOduration=3.508553864 podCreationTimestamp="2026-01-11 07:57:29 +0000 UTC" firstStartedPulling="2026-01-11 07:57:31.78561267 +0000 UTC m=+17.876642698" lastFinishedPulling="2026-01-11 07:57:38.397693298 +0000 UTC m=+24.488723331" observedRunningTime="2026-01-11 07:57:39.120567096 +0000 UTC m=+25.211597136" watchObservedRunningTime="2026-01-11 07:57:39.120634497 +0000 UTC m=+25.211664534"
	Jan 11 07:57:41 old-k8s-version-086314 kubelet[739]: I0111 07:57:41.744497     739 scope.go:117] "RemoveContainer" containerID="d6dc0e457e50faf8b54d92edc77e33b862aa922d79b65842a2455710ff1790e8"
	Jan 11 07:57:41 old-k8s-version-086314 kubelet[739]: E0111 07:57:41.744761     739 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-974nl_kubernetes-dashboard(88b37ac7-cfb3-404c-9878-76e1edd263ef)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-974nl" podUID="88b37ac7-cfb3-404c-9878-76e1edd263ef"
	Jan 11 07:57:49 old-k8s-version-086314 kubelet[739]: I0111 07:57:49.132383     739 scope.go:117] "RemoveContainer" containerID="2887e5b7f844309154ee2ebabd60573461082436207c18a29bc160eef0337f27"
	Jan 11 07:57:57 old-k8s-version-086314 kubelet[739]: I0111 07:57:57.002060     739 scope.go:117] "RemoveContainer" containerID="d6dc0e457e50faf8b54d92edc77e33b862aa922d79b65842a2455710ff1790e8"
	Jan 11 07:57:57 old-k8s-version-086314 kubelet[739]: I0111 07:57:57.153902     739 scope.go:117] "RemoveContainer" containerID="d6dc0e457e50faf8b54d92edc77e33b862aa922d79b65842a2455710ff1790e8"
	Jan 11 07:57:57 old-k8s-version-086314 kubelet[739]: I0111 07:57:57.154138     739 scope.go:117] "RemoveContainer" containerID="e7d5e624df5a69fbe9f27d96d8e518459c67009ca71ee543407b20facfc94c35"
	Jan 11 07:57:57 old-k8s-version-086314 kubelet[739]: E0111 07:57:57.154531     739 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-974nl_kubernetes-dashboard(88b37ac7-cfb3-404c-9878-76e1edd263ef)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-974nl" podUID="88b37ac7-cfb3-404c-9878-76e1edd263ef"
	Jan 11 07:58:01 old-k8s-version-086314 kubelet[739]: I0111 07:58:01.744481     739 scope.go:117] "RemoveContainer" containerID="e7d5e624df5a69fbe9f27d96d8e518459c67009ca71ee543407b20facfc94c35"
	Jan 11 07:58:01 old-k8s-version-086314 kubelet[739]: E0111 07:58:01.744744     739 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-974nl_kubernetes-dashboard(88b37ac7-cfb3-404c-9878-76e1edd263ef)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-974nl" podUID="88b37ac7-cfb3-404c-9878-76e1edd263ef"
	Jan 11 07:58:07 old-k8s-version-086314 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Jan 11 07:58:07 old-k8s-version-086314 systemd[1]: kubelet.service: Deactivated successfully.
	Jan 11 07:58:07 old-k8s-version-086314 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Jan 11 07:58:07 old-k8s-version-086314 systemd[1]: kubelet.service: Consumed 1.555s CPU time.
	
	
	==> kubernetes-dashboard [ffd86ac4a50fa6d2ff0de994b260b31f36892f9782d8d5d20f9d83f614c2a8b6] <==
	2026/01/11 07:57:38 Using namespace: kubernetes-dashboard
	2026/01/11 07:57:38 Using in-cluster config to connect to apiserver
	2026/01/11 07:57:38 Using secret token for csrf signing
	2026/01/11 07:57:38 Initializing csrf token from kubernetes-dashboard-csrf secret
	2026/01/11 07:57:38 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2026/01/11 07:57:38 Successful initial request to the apiserver, version: v1.28.0
	2026/01/11 07:57:38 Generating JWE encryption key
	2026/01/11 07:57:38 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2026/01/11 07:57:38 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2026/01/11 07:57:38 Initializing JWE encryption key from synchronized object
	2026/01/11 07:57:38 Creating in-cluster Sidecar client
	2026/01/11 07:57:38 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2026/01/11 07:57:38 Serving insecurely on HTTP port: 9090
	2026/01/11 07:58:08 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2026/01/11 07:57:38 Starting overwatch
	
	
	==> storage-provisioner [2887e5b7f844309154ee2ebabd60573461082436207c18a29bc160eef0337f27] <==
	I0111 07:57:18.362029       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0111 07:57:48.365552       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [3873b8c3319057ffe28b5dd096d787bb61bf55ed50eb16c467bff2842ce12aea] <==
	I0111 07:57:49.181178       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0111 07:57:49.189654       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0111 07:57:49.189708       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0111 07:58:06.587904       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0111 07:58:06.588095       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-086314_dca91392-e995-4255-871b-93055fc067ff!
	I0111 07:58:06.588050       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c5462a82-2278-4014-bc22-31381ebd17ec", APIVersion:"v1", ResourceVersion:"664", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-086314_dca91392-e995-4255-871b-93055fc067ff became leader
	I0111 07:58:06.688364       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-086314_dca91392-e995-4255-871b-93055fc067ff!
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-086314 -n old-k8s-version-086314
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-086314 -n old-k8s-version-086314: exit status 2 (438.889913ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context old-k8s-version-086314 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect old-k8s-version-086314
helpers_test.go:244: (dbg) docker inspect old-k8s-version-086314:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "3dea01a8b9cd82bb7717992a3417860bfe405db14a402ffeb04605a3dc1a3a6e",
	        "Created": "2026-01-11T07:55:55.943439087Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 311002,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2026-01-11T07:57:07.577254184Z",
	            "FinishedAt": "2026-01-11T07:57:06.675212797Z"
	        },
	        "Image": "sha256:9d81ad6f098dcf669510a21e6f98da4d3301079c139c22f3d7bcee050830f45e",
	        "ResolvConfPath": "/var/lib/docker/containers/3dea01a8b9cd82bb7717992a3417860bfe405db14a402ffeb04605a3dc1a3a6e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3dea01a8b9cd82bb7717992a3417860bfe405db14a402ffeb04605a3dc1a3a6e/hostname",
	        "HostsPath": "/var/lib/docker/containers/3dea01a8b9cd82bb7717992a3417860bfe405db14a402ffeb04605a3dc1a3a6e/hosts",
	        "LogPath": "/var/lib/docker/containers/3dea01a8b9cd82bb7717992a3417860bfe405db14a402ffeb04605a3dc1a3a6e/3dea01a8b9cd82bb7717992a3417860bfe405db14a402ffeb04605a3dc1a3a6e-json.log",
	        "Name": "/old-k8s-version-086314",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-086314:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-086314",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "3dea01a8b9cd82bb7717992a3417860bfe405db14a402ffeb04605a3dc1a3a6e",
	                "LowerDir": "/var/lib/docker/overlay2/98f2997c446d54f761fca052ac1e48a6fb561a1c57bfe758c99a4f45d76f0a83-init/diff:/var/lib/docker/overlay2/fbed3e2386e680328722e9ed68250a4aae1dd3b50a64848ee2ebb685fb1cc002/diff",
	                "MergedDir": "/var/lib/docker/overlay2/98f2997c446d54f761fca052ac1e48a6fb561a1c57bfe758c99a4f45d76f0a83/merged",
	                "UpperDir": "/var/lib/docker/overlay2/98f2997c446d54f761fca052ac1e48a6fb561a1c57bfe758c99a4f45d76f0a83/diff",
	                "WorkDir": "/var/lib/docker/overlay2/98f2997c446d54f761fca052ac1e48a6fb561a1c57bfe758c99a4f45d76f0a83/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-086314",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-086314/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-086314",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-086314",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-086314",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "113864cbe9cbc0b20122fd36a7a1f529fa0e1e7d342a2f5c26ee4d3326d9afba",
	            "SandboxKey": "/var/run/docker/netns/113864cbe9cb",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33108"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33109"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33112"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33110"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33111"
	                    }
	                ]
	            },
	            "Networks": {
	                "old-k8s-version-086314": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "9bb753c53c10f85ef7bfc0f02abe3d37237f5f78e229ae0d2842e242dc126077",
	                    "EndpointID": "427ac4491b9181dcfe1fabf2af5b1f95ed1347cb6f95ab09f6c8ab5e4181a3c2",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "2e:07:6b:53:65:40",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-086314",
	                        "3dea01a8b9cd"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-086314 -n old-k8s-version-086314
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-086314 -n old-k8s-version-086314: exit status 2 (371.727318ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-086314 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-086314 logs -n 25: (1.320265852s)
helpers_test.go:261: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-181803 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ bridge-181803                │ jenkins │ v1.37.0 │ 11 Jan 26 07:56 UTC │ 11 Jan 26 07:56 UTC │
	│ ssh     │ -p bridge-181803 sudo containerd config dump                                                                                                                                                                                                  │ bridge-181803                │ jenkins │ v1.37.0 │ 11 Jan 26 07:56 UTC │ 11 Jan 26 07:56 UTC │
	│ ssh     │ -p bridge-181803 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ bridge-181803                │ jenkins │ v1.37.0 │ 11 Jan 26 07:56 UTC │ 11 Jan 26 07:56 UTC │
	│ ssh     │ -p bridge-181803 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ bridge-181803                │ jenkins │ v1.37.0 │ 11 Jan 26 07:56 UTC │ 11 Jan 26 07:56 UTC │
	│ ssh     │ -p bridge-181803 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ bridge-181803                │ jenkins │ v1.37.0 │ 11 Jan 26 07:56 UTC │ 11 Jan 26 07:56 UTC │
	│ ssh     │ -p bridge-181803 sudo crio config                                                                                                                                                                                                             │ bridge-181803                │ jenkins │ v1.37.0 │ 11 Jan 26 07:56 UTC │ 11 Jan 26 07:56 UTC │
	│ delete  │ -p bridge-181803                                                                                                                                                                                                                              │ bridge-181803                │ jenkins │ v1.37.0 │ 11 Jan 26 07:56 UTC │ 11 Jan 26 07:56 UTC │
	│ delete  │ -p disable-driver-mounts-768607                                                                                                                                                                                                               │ disable-driver-mounts-768607 │ jenkins │ v1.37.0 │ 11 Jan 26 07:56 UTC │ 11 Jan 26 07:56 UTC │
	│ start   │ -p default-k8s-diff-port-585688 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-585688 │ jenkins │ v1.37.0 │ 11 Jan 26 07:56 UTC │ 11 Jan 26 07:57 UTC │
	│ addons  │ enable metrics-server -p no-preload-875594 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-875594            │ jenkins │ v1.37.0 │ 11 Jan 26 07:56 UTC │                     │
	│ stop    │ -p no-preload-875594 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-875594            │ jenkins │ v1.37.0 │ 11 Jan 26 07:56 UTC │ 11 Jan 26 07:57 UTC │
	│ addons  │ enable metrics-server -p embed-certs-285966 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-285966           │ jenkins │ v1.37.0 │ 11 Jan 26 07:57 UTC │                     │
	│ stop    │ -p embed-certs-285966 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-285966           │ jenkins │ v1.37.0 │ 11 Jan 26 07:57 UTC │ 11 Jan 26 07:57 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-086314 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-086314       │ jenkins │ v1.37.0 │ 11 Jan 26 07:57 UTC │ 11 Jan 26 07:57 UTC │
	│ start   │ -p old-k8s-version-086314 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-086314       │ jenkins │ v1.37.0 │ 11 Jan 26 07:57 UTC │ 11 Jan 26 07:57 UTC │
	│ addons  │ enable dashboard -p no-preload-875594 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-875594            │ jenkins │ v1.37.0 │ 11 Jan 26 07:57 UTC │ 11 Jan 26 07:57 UTC │
	│ start   │ -p no-preload-875594 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                       │ no-preload-875594            │ jenkins │ v1.37.0 │ 11 Jan 26 07:57 UTC │ 11 Jan 26 07:58 UTC │
	│ addons  │ enable dashboard -p embed-certs-285966 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-285966           │ jenkins │ v1.37.0 │ 11 Jan 26 07:57 UTC │ 11 Jan 26 07:57 UTC │
	│ start   │ -p embed-certs-285966 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                        │ embed-certs-285966           │ jenkins │ v1.37.0 │ 11 Jan 26 07:57 UTC │ 11 Jan 26 07:58 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-585688 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-585688 │ jenkins │ v1.37.0 │ 11 Jan 26 07:57 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-585688 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-585688 │ jenkins │ v1.37.0 │ 11 Jan 26 07:57 UTC │ 11 Jan 26 07:58 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-585688 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-585688 │ jenkins │ v1.37.0 │ 11 Jan 26 07:58 UTC │ 11 Jan 26 07:58 UTC │
	│ start   │ -p default-k8s-diff-port-585688 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-585688 │ jenkins │ v1.37.0 │ 11 Jan 26 07:58 UTC │                     │
	│ image   │ old-k8s-version-086314 image list --format=json                                                                                                                                                                                               │ old-k8s-version-086314       │ jenkins │ v1.37.0 │ 11 Jan 26 07:58 UTC │ 11 Jan 26 07:58 UTC │
	│ pause   │ -p old-k8s-version-086314 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-086314       │ jenkins │ v1.37.0 │ 11 Jan 26 07:58 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2026/01/11 07:58:03
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0111 07:58:03.750293  321160 out.go:360] Setting OutFile to fd 1 ...
	I0111 07:58:03.750380  321160 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 07:58:03.750385  321160 out.go:374] Setting ErrFile to fd 2...
	I0111 07:58:03.750389  321160 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 07:58:03.750603  321160 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22402-4689/.minikube/bin
	I0111 07:58:03.751045  321160 out.go:368] Setting JSON to false
	I0111 07:58:03.752302  321160 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2432,"bootTime":1768115852,"procs":362,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0111 07:58:03.752356  321160 start.go:143] virtualization: kvm guest
	I0111 07:58:03.754333  321160 out.go:179] * [default-k8s-diff-port-585688] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0111 07:58:03.755670  321160 out.go:179]   - MINIKUBE_LOCATION=22402
	I0111 07:58:03.755682  321160 notify.go:221] Checking for updates...
	I0111 07:58:03.760338  321160 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0111 07:58:03.761494  321160 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22402-4689/kubeconfig
	I0111 07:58:03.762557  321160 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22402-4689/.minikube
	I0111 07:58:03.763473  321160 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0111 07:58:03.764542  321160 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0111 07:58:03.765996  321160 config.go:182] Loaded profile config "default-k8s-diff-port-585688": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 07:58:03.766475  321160 driver.go:422] Setting default libvirt URI to qemu:///system
	I0111 07:58:03.790517  321160 docker.go:124] docker version: linux-29.1.4:Docker Engine - Community
	I0111 07:58:03.790609  321160 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0111 07:58:03.850389  321160 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:74 SystemTime:2026-01-11 07:58:03.839194751 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0111 07:58:03.850495  321160 docker.go:319] overlay module found
	I0111 07:58:03.852223  321160 out.go:179] * Using the docker driver based on existing profile
	I0111 07:58:03.853346  321160 start.go:309] selected driver: docker
	I0111 07:58:03.853363  321160 start.go:928] validating driver "docker" against &{Name:default-k8s-diff-port-585688 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-585688 Namespace:default APISe
rverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] M
ountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0111 07:58:03.853470  321160 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0111 07:58:03.854110  321160 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0111 07:58:03.909243  321160 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:74 SystemTime:2026-01-11 07:58:03.898847213 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0111 07:58:03.909498  321160 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0111 07:58:03.909520  321160 cni.go:84] Creating CNI manager for ""
	I0111 07:58:03.909574  321160 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0111 07:58:03.909605  321160 start.go:353] cluster config:
	{Name:default-k8s-diff-port-585688 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-585688 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0111 07:58:03.912090  321160 out.go:179] * Starting "default-k8s-diff-port-585688" primary control-plane node in "default-k8s-diff-port-585688" cluster
	I0111 07:58:03.913198  321160 cache.go:134] Beginning downloading kic base image for docker with crio
	I0111 07:58:03.914457  321160 out.go:179] * Pulling base image v0.0.48-1768032998-22402 ...
	I0111 07:58:03.915521  321160 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0111 07:58:03.915557  321160 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22402-4689/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4
	I0111 07:58:03.915566  321160 cache.go:65] Caching tarball of preloaded images
	I0111 07:58:03.915650  321160 preload.go:251] Found /home/jenkins/minikube-integration/22402-4689/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0111 07:58:03.915647  321160 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 in local docker daemon
	I0111 07:58:03.915662  321160 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I0111 07:58:03.915923  321160 profile.go:143] Saving config to /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/default-k8s-diff-port-585688/config.json ...
	I0111 07:58:03.937090  321160 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 in local docker daemon, skipping pull
	I0111 07:58:03.937113  321160 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 exists in daemon, skipping load
	I0111 07:58:03.937135  321160 cache.go:243] Successfully downloaded all kic artifacts
	I0111 07:58:03.937167  321160 start.go:360] acquireMachinesLock for default-k8s-diff-port-585688: {Name:mk2e6a9de762d1bbe8860b89830bc1ce9534f022 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0111 07:58:03.937235  321160 start.go:364] duration metric: took 38.527µs to acquireMachinesLock for "default-k8s-diff-port-585688"
	I0111 07:58:03.937257  321160 start.go:96] Skipping create...Using existing machine configuration
	I0111 07:58:03.937263  321160 fix.go:54] fixHost starting: 
	I0111 07:58:03.937492  321160 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-585688 --format={{.State.Status}}
	I0111 07:58:03.956257  321160 fix.go:112] recreateIfNeeded on default-k8s-diff-port-585688: state=Stopped err=<nil>
	W0111 07:58:03.956296  321160 fix.go:138] unexpected machine state, will restart: <nil>
	I0111 07:58:03.958165  321160 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-585688" ...
	I0111 07:58:03.958234  321160 cli_runner.go:164] Run: docker start default-k8s-diff-port-585688
	I0111 07:58:04.218023  321160 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-585688 --format={{.State.Status}}
	I0111 07:58:04.237128  321160 kic.go:430] container "default-k8s-diff-port-585688" state is running.
	I0111 07:58:04.237486  321160 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-585688
	I0111 07:58:04.256566  321160 profile.go:143] Saving config to /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/default-k8s-diff-port-585688/config.json ...
	I0111 07:58:04.256863  321160 machine.go:94] provisionDockerMachine start ...
	I0111 07:58:04.256948  321160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-585688
	I0111 07:58:04.276200  321160 main.go:144] libmachine: Using SSH client type: native
	I0111 07:58:04.276485  321160 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x9087e0] 0x90b480 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I0111 07:58:04.276505  321160 main.go:144] libmachine: About to run SSH command:
	hostname
	I0111 07:58:04.277203  321160 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:43966->127.0.0.1:33123: read: connection reset by peer
	I0111 07:58:07.423204  321160 main.go:144] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-585688
	
	I0111 07:58:07.423233  321160 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-585688"
	I0111 07:58:07.423302  321160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-585688
	I0111 07:58:07.443350  321160 main.go:144] libmachine: Using SSH client type: native
	I0111 07:58:07.443550  321160 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x9087e0] 0x90b480 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I0111 07:58:07.443562  321160 main.go:144] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-585688 && echo "default-k8s-diff-port-585688" | sudo tee /etc/hostname
	I0111 07:58:07.600640  321160 main.go:144] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-585688
	
	I0111 07:58:07.600736  321160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-585688
	I0111 07:58:07.621646  321160 main.go:144] libmachine: Using SSH client type: native
	I0111 07:58:07.621975  321160 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x9087e0] 0x90b480 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I0111 07:58:07.622014  321160 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-585688' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-585688/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-585688' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0111 07:58:07.770516  321160 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I0111 07:58:07.770544  321160 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22402-4689/.minikube CaCertPath:/home/jenkins/minikube-integration/22402-4689/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22402-4689/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22402-4689/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22402-4689/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22402-4689/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22402-4689/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22402-4689/.minikube}
	I0111 07:58:07.770594  321160 ubuntu.go:190] setting up certificates
	I0111 07:58:07.770613  321160 provision.go:84] configureAuth start
	I0111 07:58:07.770680  321160 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-585688
	I0111 07:58:07.788321  321160 provision.go:143] copyHostCerts
	I0111 07:58:07.788406  321160 exec_runner.go:144] found /home/jenkins/minikube-integration/22402-4689/.minikube/ca.pem, removing ...
	I0111 07:58:07.788419  321160 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22402-4689/.minikube/ca.pem
	I0111 07:58:07.788492  321160 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22402-4689/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22402-4689/.minikube/ca.pem (1082 bytes)
	I0111 07:58:07.788602  321160 exec_runner.go:144] found /home/jenkins/minikube-integration/22402-4689/.minikube/cert.pem, removing ...
	I0111 07:58:07.788611  321160 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22402-4689/.minikube/cert.pem
	I0111 07:58:07.788643  321160 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22402-4689/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22402-4689/.minikube/cert.pem (1123 bytes)
	I0111 07:58:07.788720  321160 exec_runner.go:144] found /home/jenkins/minikube-integration/22402-4689/.minikube/key.pem, removing ...
	I0111 07:58:07.788727  321160 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22402-4689/.minikube/key.pem
	I0111 07:58:07.788752  321160 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22402-4689/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22402-4689/.minikube/key.pem (1675 bytes)
	I0111 07:58:07.788836  321160 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22402-4689/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22402-4689/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22402-4689/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-585688 san=[127.0.0.1 192.168.94.2 default-k8s-diff-port-585688 localhost minikube]
	I0111 07:58:07.847825  321160 provision.go:177] copyRemoteCerts
	I0111 07:58:07.847881  321160 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0111 07:58:07.847920  321160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-585688
	I0111 07:58:07.866299  321160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22402-4689/.minikube/machines/default-k8s-diff-port-585688/id_rsa Username:docker}
	I0111 07:58:07.968665  321160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-4689/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0111 07:58:07.986297  321160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-4689/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0111 07:58:08.003675  321160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-4689/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0111 07:58:08.020217  321160 provision.go:87] duration metric: took 249.589615ms to configureAuth
	I0111 07:58:08.020247  321160 ubuntu.go:206] setting minikube options for container-runtime
	I0111 07:58:08.020421  321160 config.go:182] Loaded profile config "default-k8s-diff-port-585688": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 07:58:08.020520  321160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-585688
	I0111 07:58:08.038641  321160 main.go:144] libmachine: Using SSH client type: native
	I0111 07:58:08.038886  321160 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x9087e0] 0x90b480 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I0111 07:58:08.038906  321160 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0111 07:58:08.392034  321160 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0111 07:58:08.392061  321160 machine.go:97] duration metric: took 4.135180228s to provisionDockerMachine
	I0111 07:58:08.392076  321160 start.go:293] postStartSetup for "default-k8s-diff-port-585688" (driver="docker")
	I0111 07:58:08.392091  321160 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0111 07:58:08.392193  321160 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0111 07:58:08.392248  321160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-585688
	I0111 07:58:08.413393  321160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22402-4689/.minikube/machines/default-k8s-diff-port-585688/id_rsa Username:docker}
	I0111 07:58:08.516961  321160 ssh_runner.go:195] Run: cat /etc/os-release
	I0111 07:58:08.520383  321160 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0111 07:58:08.520406  321160 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I0111 07:58:08.520417  321160 filesync.go:126] Scanning /home/jenkins/minikube-integration/22402-4689/.minikube/addons for local assets ...
	I0111 07:58:08.520476  321160 filesync.go:126] Scanning /home/jenkins/minikube-integration/22402-4689/.minikube/files for local assets ...
	I0111 07:58:08.520571  321160 filesync.go:149] local asset: /home/jenkins/minikube-integration/22402-4689/.minikube/files/etc/ssl/certs/82382.pem -> 82382.pem in /etc/ssl/certs
	I0111 07:58:08.520693  321160 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0111 07:58:08.528116  321160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-4689/.minikube/files/etc/ssl/certs/82382.pem --> /etc/ssl/certs/82382.pem (1708 bytes)
	I0111 07:58:08.545008  321160 start.go:296] duration metric: took 152.917379ms for postStartSetup
	I0111 07:58:08.545089  321160 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0111 07:58:08.545135  321160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-585688
	I0111 07:58:08.565305  321160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22402-4689/.minikube/machines/default-k8s-diff-port-585688/id_rsa Username:docker}
	I0111 07:58:08.663695  321160 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0111 07:58:08.668649  321160 fix.go:56] duration metric: took 4.731380965s for fixHost
	I0111 07:58:08.668678  321160 start.go:83] releasing machines lock for "default-k8s-diff-port-585688", held for 4.731430414s
	I0111 07:58:08.668736  321160 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-585688
	I0111 07:58:08.686470  321160 ssh_runner.go:195] Run: cat /version.json
	I0111 07:58:08.686520  321160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-585688
	I0111 07:58:08.686548  321160 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0111 07:58:08.686604  321160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-585688
	I0111 07:58:08.705659  321160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22402-4689/.minikube/machines/default-k8s-diff-port-585688/id_rsa Username:docker}
	I0111 07:58:08.705850  321160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22402-4689/.minikube/machines/default-k8s-diff-port-585688/id_rsa Username:docker}
	I0111 07:58:08.802583  321160 ssh_runner.go:195] Run: systemctl --version
	I0111 07:58:08.857955  321160 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0111 07:58:08.894328  321160 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0111 07:58:08.899416  321160 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0111 07:58:08.899467  321160 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0111 07:58:08.907527  321160 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0111 07:58:08.907546  321160 start.go:496] detecting cgroup driver to use...
	I0111 07:58:08.907572  321160 detect.go:178] detected "systemd" cgroup driver on host os
	I0111 07:58:08.907611  321160 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0111 07:58:08.921690  321160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0111 07:58:08.934582  321160 docker.go:218] disabling cri-docker service (if available) ...
	I0111 07:58:08.934627  321160 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0111 07:58:08.949397  321160 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0111 07:58:08.961533  321160 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0111 07:58:09.051133  321160 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0111 07:58:09.145618  321160 docker.go:234] disabling docker service ...
	I0111 07:58:09.145678  321160 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0111 07:58:09.162569  321160 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0111 07:58:09.176818  321160 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0111 07:58:09.269156  321160 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0111 07:58:09.366007  321160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0111 07:58:09.378870  321160 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0111 07:58:09.393343  321160 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0111 07:58:09.393400  321160 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 07:58:09.402014  321160 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I0111 07:58:09.402068  321160 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 07:58:09.411013  321160 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 07:58:09.419523  321160 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 07:58:09.428136  321160 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0111 07:58:09.436223  321160 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 07:58:09.445180  321160 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 07:58:09.453107  321160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 07:58:09.461768  321160 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0111 07:58:09.469426  321160 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0111 07:58:09.476697  321160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0111 07:58:09.570322  321160 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0111 07:58:09.722296  321160 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I0111 07:58:09.722379  321160 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0111 07:58:09.727410  321160 start.go:574] Will wait 60s for crictl version
	I0111 07:58:09.727482  321160 ssh_runner.go:195] Run: which crictl
	I0111 07:58:09.731162  321160 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I0111 07:58:09.761480  321160 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.35.0
	RuntimeApiVersion:  v1
	I0111 07:58:09.761578  321160 ssh_runner.go:195] Run: crio --version
	I0111 07:58:09.790957  321160 ssh_runner.go:195] Run: crio --version
	I0111 07:58:09.821090  321160 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.35.0 ...
	I0111 07:58:09.822313  321160 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-585688 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0111 07:58:09.840952  321160 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I0111 07:58:09.845556  321160 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0111 07:58:09.856115  321160 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-585688 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-585688 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I0111 07:58:09.856226  321160 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0111 07:58:09.856272  321160 ssh_runner.go:195] Run: sudo crictl images --output json
	I0111 07:58:09.894086  321160 crio.go:561] all images are preloaded for cri-o runtime.
	I0111 07:58:09.894116  321160 crio.go:433] Images already preloaded, skipping extraction
	I0111 07:58:09.894171  321160 ssh_runner.go:195] Run: sudo crictl images --output json
	I0111 07:58:09.927564  321160 crio.go:561] all images are preloaded for cri-o runtime.
	I0111 07:58:09.927585  321160 cache_images.go:86] Images are preloaded, skipping loading
	I0111 07:58:09.927595  321160 kubeadm.go:935] updating node { 192.168.94.2 8444 v1.35.0 crio true true} ...
	I0111 07:58:09.927696  321160 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-585688 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-585688 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
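The [Unit]/[Service]/[Install] fragment printed above is not applied at this point; a few lines further down it is copied into a systemd drop-in and kubelet is restarted. In outline (same paths as the scp and systemctl calls below; $UNIT_TEXT stands in for the fragment above):

    sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
    printf '%s\n' "$UNIT_TEXT" | sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf >/dev/null
    sudo systemctl daemon-reload
    sudo systemctl start kubelet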
	I0111 07:58:09.927766  321160 ssh_runner.go:195] Run: crio config
	I0111 07:58:09.979146  321160 cni.go:84] Creating CNI manager for ""
	I0111 07:58:09.979176  321160 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0111 07:58:09.979195  321160 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I0111 07:58:09.979224  321160 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8444 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-585688 NodeName:default-k8s-diff-port-585688 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.c
rt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0111 07:58:09.979417  321160 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-585688"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
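The generated config above is a multi-document kubeadm file (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration); a few lines below it is written to /var/tmp/minikube/kubeadm.yaml.new. A quick, purely illustrative way to pull the networking-relevant values back out of such a file:

    cfg=/var/tmp/minikube/kubeadm.yaml.new
    grep -E 'bindPort|controlPlaneEndpoint|podSubnet|serviceSubnet|clusterCIDR' "$cfg"
    # recent kubeadm releases can also sanity-check it with: kubeadm config validate --config "$cfg" (if available)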
	
	I0111 07:58:09.979500  321160 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I0111 07:58:09.988051  321160 binaries.go:51] Found k8s binaries, skipping transfer
	I0111 07:58:09.988123  321160 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0111 07:58:09.996526  321160 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I0111 07:58:10.012228  321160 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0111 07:58:10.025768  321160 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
	I0111 07:58:10.039959  321160 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I0111 07:58:10.044677  321160 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0111 07:58:10.055135  321160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0111 07:58:10.148780  321160 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0111 07:58:10.176295  321160 certs.go:69] Setting up /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/default-k8s-diff-port-585688 for IP: 192.168.94.2
	I0111 07:58:10.176324  321160 certs.go:195] generating shared ca certs ...
	I0111 07:58:10.176345  321160 certs.go:227] acquiring lock for ca certs: {Name:mk9f7a57f61b85f2c0f52e44b6a926769e368f21 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 07:58:10.176550  321160 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22402-4689/.minikube/ca.key
	I0111 07:58:10.176628  321160 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22402-4689/.minikube/proxy-client-ca.key
	I0111 07:58:10.176645  321160 certs.go:257] generating profile certs ...
	I0111 07:58:10.176777  321160 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/default-k8s-diff-port-585688/client.key
	I0111 07:58:10.176903  321160 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/default-k8s-diff-port-585688/apiserver.key.9fad6aad
	I0111 07:58:10.176972  321160 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/default-k8s-diff-port-585688/proxy-client.key
	I0111 07:58:10.177118  321160 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-4689/.minikube/certs/8238.pem (1338 bytes)
	W0111 07:58:10.177163  321160 certs.go:480] ignoring /home/jenkins/minikube-integration/22402-4689/.minikube/certs/8238_empty.pem, impossibly tiny 0 bytes
	I0111 07:58:10.177180  321160 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-4689/.minikube/certs/ca-key.pem (1679 bytes)
	I0111 07:58:10.177218  321160 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-4689/.minikube/certs/ca.pem (1082 bytes)
	I0111 07:58:10.177251  321160 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-4689/.minikube/certs/cert.pem (1123 bytes)
	I0111 07:58:10.177303  321160 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-4689/.minikube/certs/key.pem (1675 bytes)
	I0111 07:58:10.177362  321160 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-4689/.minikube/files/etc/ssl/certs/82382.pem (1708 bytes)
	I0111 07:58:10.178133  321160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-4689/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0111 07:58:10.200055  321160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-4689/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0111 07:58:10.220310  321160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-4689/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0111 07:58:10.241596  321160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-4689/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0111 07:58:10.264851  321160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/default-k8s-diff-port-585688/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0111 07:58:10.286183  321160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/default-k8s-diff-port-585688/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0111 07:58:10.305436  321160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/default-k8s-diff-port-585688/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0111 07:58:10.324464  321160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/default-k8s-diff-port-585688/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0111 07:58:10.341915  321160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-4689/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0111 07:58:10.359530  321160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-4689/.minikube/certs/8238.pem --> /usr/share/ca-certificates/8238.pem (1338 bytes)
	I0111 07:58:10.378385  321160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-4689/.minikube/files/etc/ssl/certs/82382.pem --> /usr/share/ca-certificates/82382.pem (1708 bytes)
	I0111 07:58:10.398915  321160 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I0111 07:58:10.412463  321160 ssh_runner.go:195] Run: openssl version
	I0111 07:58:10.418410  321160 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I0111 07:58:10.425756  321160 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I0111 07:58:10.433151  321160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0111 07:58:10.436999  321160 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 11 07:23 /usr/share/ca-certificates/minikubeCA.pem
	I0111 07:58:10.437049  321160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0111 07:58:10.475544  321160 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I0111 07:58:10.482877  321160 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8238.pem
	I0111 07:58:10.490480  321160 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8238.pem /etc/ssl/certs/8238.pem
	I0111 07:58:10.498796  321160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8238.pem
	I0111 07:58:10.503030  321160 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 11 07:27 /usr/share/ca-certificates/8238.pem
	I0111 07:58:10.503082  321160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8238.pem
	I0111 07:58:10.540672  321160 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I0111 07:58:10.548062  321160 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/82382.pem
	I0111 07:58:10.555795  321160 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/82382.pem /etc/ssl/certs/82382.pem
	I0111 07:58:10.564684  321160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/82382.pem
	I0111 07:58:10.568782  321160 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 11 07:27 /usr/share/ca-certificates/82382.pem
	I0111 07:58:10.568849  321160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/82382.pem
	I0111 07:58:10.606242  321160 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
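	
	The hash-named entries verified above under /etc/ssl/certs (b5213941.0, 51391683.0 and 3ec20f2e.0) follow OpenSSL's subject-hash naming convention; the "openssl x509 -hash -noout" runs in the log compute exactly that name before the symlink is checked. A minimal stand-alone sketch of the same install-and-verify step, reusing the minikubeCA.pem path from the log (illustrative only):
	
	  hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	  sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"
	  sudo test -L "/etc/ssl/certs/${hash}.0"   # symlink name is <subject-hash>.0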
	I0111 07:58:10.614724  321160 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0111 07:58:10.618608  321160 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0111 07:58:10.657921  321160 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0111 07:58:10.701121  321160 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0111 07:58:10.752169  321160 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0111 07:58:10.799346  321160 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0111 07:58:10.859609  321160 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
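	
	The "-checkend 86400" runs above ask whether each control-plane certificate will still be valid 24 hours from now: openssl exits 0 when the certificate does not expire within the given number of seconds and exits non-zero when it does. An equivalent stand-alone check on the apiserver certificate path from the log (illustrative only):
	
	  if openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400; then
	    echo "apiserver.crt is valid for at least another 24h"
	  else
	    echo "apiserver.crt expires within 24h (or could not be read)"
	  fi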
	I0111 07:58:10.919705  321160 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-585688 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-585688 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0111 07:58:10.919881  321160 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0111 07:58:10.919982  321160 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0111 07:58:10.955395  321160 cri.go:96] found id: "8f22f3b6659f979a42e531e71e66b83f2e5a22b700a721070ae92fe483579939"
	I0111 07:58:10.955470  321160 cri.go:96] found id: "63e746d642de550ccfb3f5c5bee17e53f1ac0ac24283ebc2ad8f7153b15dec23"
	I0111 07:58:10.955484  321160 cri.go:96] found id: "4f9f1c19c75f7ac1ecbae6c98fc1987feb975fc126f5bcd1565546bc42504918"
	I0111 07:58:10.955504  321160 cri.go:96] found id: "a5a86114d8fe5ac425c7c94815a54aaa43fc1dc726e4e3b0be78ef6291f0b2df"
	I0111 07:58:10.955515  321160 cri.go:96] found id: ""
	I0111 07:58:10.955570  321160 ssh_runner.go:195] Run: sudo runc list -f json
	W0111 07:58:10.974501  321160 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-11T07:58:10Z" level=error msg="open /run/runc: no such file or directory"
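	
	The paused-state probe above lists kube-system containers through crictl and then asks runc for their paused state; the runc call fails here because /run/runc does not exist on the node. The crictl invocation in the log uses --quiet and therefore prints only container IDs; dropping that flag also shows each container's state, which is handy when reproducing the check by hand (illustrative only):
	
	  sudo crictl --timeout=10s ps -a --label io.kubernetes.pod.namespace=kube-system
	  # same filter as the log, but printed as the full table (ID, image, state, name, pod)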
	I0111 07:58:10.974655  321160 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0111 07:58:10.984862  321160 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I0111 07:58:10.984883  321160 kubeadm.go:598] restartPrimaryControlPlane start ...
	I0111 07:58:10.984947  321160 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0111 07:58:10.993617  321160 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0111 07:58:10.994926  321160 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-585688" does not appear in /home/jenkins/minikube-integration/22402-4689/kubeconfig
	I0111 07:58:10.995844  321160 kubeconfig.go:62] /home/jenkins/minikube-integration/22402-4689/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-585688" cluster setting kubeconfig missing "default-k8s-diff-port-585688" context setting]
	I0111 07:58:10.997160  321160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-4689/kubeconfig: {Name:mkf02a7b755a7b9c0efbafb355649086e2a25e84 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 07:58:10.999520  321160 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0111 07:58:11.008328  321160 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.94.2
	I0111 07:58:11.008360  321160 kubeadm.go:602] duration metric: took 23.470875ms to restartPrimaryControlPlane
	I0111 07:58:11.008370  321160 kubeadm.go:403] duration metric: took 88.672136ms to StartCluster
	I0111 07:58:11.008390  321160 settings.go:142] acquiring lock: {Name:mk091111a06dcfa678353e028948ce400417c137 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 07:58:11.008453  321160 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22402-4689/kubeconfig
	I0111 07:58:11.010935  321160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-4689/kubeconfig: {Name:mkf02a7b755a7b9c0efbafb355649086e2a25e84 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 07:58:11.011207  321160 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0111 07:58:11.011310  321160 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0111 07:58:11.011405  321160 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-585688"
	I0111 07:58:11.011425  321160 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-585688"
	W0111 07:58:11.011437  321160 addons.go:248] addon storage-provisioner should already be in state true
	I0111 07:58:11.011443  321160 config.go:182] Loaded profile config "default-k8s-diff-port-585688": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 07:58:11.011468  321160 host.go:66] Checking if "default-k8s-diff-port-585688" exists ...
	I0111 07:58:11.011466  321160 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-585688"
	I0111 07:58:11.011494  321160 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-585688"
	I0111 07:58:11.011492  321160 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-585688"
	I0111 07:58:11.011525  321160 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-585688"
	W0111 07:58:11.011506  321160 addons.go:248] addon dashboard should already be in state true
	I0111 07:58:11.011687  321160 host.go:66] Checking if "default-k8s-diff-port-585688" exists ...
	I0111 07:58:11.011860  321160 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-585688 --format={{.State.Status}}
	I0111 07:58:11.011974  321160 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-585688 --format={{.State.Status}}
	I0111 07:58:11.012180  321160 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-585688 --format={{.State.Status}}
	I0111 07:58:11.014031  321160 out.go:179] * Verifying Kubernetes components...
	I0111 07:58:11.015253  321160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0111 07:58:11.038372  321160 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0111 07:58:11.039262  321160 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-585688"
	W0111 07:58:11.039284  321160 addons.go:248] addon default-storageclass should already be in state true
	I0111 07:58:11.039312  321160 host.go:66] Checking if "default-k8s-diff-port-585688" exists ...
	I0111 07:58:11.039655  321160 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0111 07:58:11.039708  321160 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0111 07:58:11.039725  321160 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0111 07:58:11.039750  321160 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-585688 --format={{.State.Status}}
	I0111 07:58:11.039774  321160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-585688
	I0111 07:58:11.041962  321160 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
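	
	The "docker container inspect -f ..." calls above use a Go template to read the host port that Docker mapped to the guest's SSH port (22/tcp), which apparently feeds the SSH connections used by the ssh_runner steps earlier in the log. The same query can be issued by hand against the profile's container (illustrative only):
	
	  docker container inspect default-k8s-diff-port-585688 \
	    --format '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'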
	
	
	==> CRI-O <==
	Jan 11 07:57:38 old-k8s-version-086314 crio[574]: time="2026-01-11T07:57:38.454308414Z" level=info msg="Created container ffd86ac4a50fa6d2ff0de994b260b31f36892f9782d8d5d20f9d83f614c2a8b6: kubernetes-dashboard/kubernetes-dashboard-8694d4445c-5jfjr/kubernetes-dashboard" id=19c5da80-5439-46d4-aeef-f9a13f394836 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 11 07:57:38 old-k8s-version-086314 crio[574]: time="2026-01-11T07:57:38.455369119Z" level=info msg="Starting container: ffd86ac4a50fa6d2ff0de994b260b31f36892f9782d8d5d20f9d83f614c2a8b6" id=fa76943a-adac-4cb8-8778-3a11ae399640 name=/runtime.v1.RuntimeService/StartContainer
	Jan 11 07:57:38 old-k8s-version-086314 crio[574]: time="2026-01-11T07:57:38.458218352Z" level=info msg="Started container" PID=1752 containerID=ffd86ac4a50fa6d2ff0de994b260b31f36892f9782d8d5d20f9d83f614c2a8b6 description=kubernetes-dashboard/kubernetes-dashboard-8694d4445c-5jfjr/kubernetes-dashboard id=fa76943a-adac-4cb8-8778-3a11ae399640 name=/runtime.v1.RuntimeService/StartContainer sandboxID=edf3c8cfc9e3ab3b6a240161e083aab9af76e9e28e9febf11296c37f37cc01be
	Jan 11 07:57:49 old-k8s-version-086314 crio[574]: time="2026-01-11T07:57:49.132883512Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=a66e3b9f-0c54-4cca-b9a5-c401ce433773 name=/runtime.v1.ImageService/ImageStatus
	Jan 11 07:57:49 old-k8s-version-086314 crio[574]: time="2026-01-11T07:57:49.13386867Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=06a317df-b821-4291-bacd-ddfa3ce88bac name=/runtime.v1.ImageService/ImageStatus
	Jan 11 07:57:49 old-k8s-version-086314 crio[574]: time="2026-01-11T07:57:49.134930212Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=5376f286-d2e1-445d-9d75-eb35e11140a4 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 11 07:57:49 old-k8s-version-086314 crio[574]: time="2026-01-11T07:57:49.135089769Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 11 07:57:49 old-k8s-version-086314 crio[574]: time="2026-01-11T07:57:49.139873555Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 11 07:57:49 old-k8s-version-086314 crio[574]: time="2026-01-11T07:57:49.140083005Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/99a6de00a7d032804068f0037f783b59ff11b2b85c0d3ca4a86cdfcc6563fd25/merged/etc/passwd: no such file or directory"
	Jan 11 07:57:49 old-k8s-version-086314 crio[574]: time="2026-01-11T07:57:49.140116904Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/99a6de00a7d032804068f0037f783b59ff11b2b85c0d3ca4a86cdfcc6563fd25/merged/etc/group: no such file or directory"
	Jan 11 07:57:49 old-k8s-version-086314 crio[574]: time="2026-01-11T07:57:49.140410516Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 11 07:57:49 old-k8s-version-086314 crio[574]: time="2026-01-11T07:57:49.166543262Z" level=info msg="Created container 3873b8c3319057ffe28b5dd096d787bb61bf55ed50eb16c467bff2842ce12aea: kube-system/storage-provisioner/storage-provisioner" id=5376f286-d2e1-445d-9d75-eb35e11140a4 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 11 07:57:49 old-k8s-version-086314 crio[574]: time="2026-01-11T07:57:49.167116463Z" level=info msg="Starting container: 3873b8c3319057ffe28b5dd096d787bb61bf55ed50eb16c467bff2842ce12aea" id=be1edc3b-4a3b-40a5-93e0-a46eab0c775d name=/runtime.v1.RuntimeService/StartContainer
	Jan 11 07:57:49 old-k8s-version-086314 crio[574]: time="2026-01-11T07:57:49.168708716Z" level=info msg="Started container" PID=1774 containerID=3873b8c3319057ffe28b5dd096d787bb61bf55ed50eb16c467bff2842ce12aea description=kube-system/storage-provisioner/storage-provisioner id=be1edc3b-4a3b-40a5-93e0-a46eab0c775d name=/runtime.v1.RuntimeService/StartContainer sandboxID=8ddc6863a51a676e59e2535422ad0591d88d6c5a88adbe24fcdf612dfb7f2fb6
	Jan 11 07:57:57 old-k8s-version-086314 crio[574]: time="2026-01-11T07:57:57.002757534Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=47a12997-1eed-48c2-ab98-f61eb4897e38 name=/runtime.v1.ImageService/ImageStatus
	Jan 11 07:57:57 old-k8s-version-086314 crio[574]: time="2026-01-11T07:57:57.003886334Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=e7b15f6d-1709-4ed3-8470-07ca9da957ac name=/runtime.v1.ImageService/ImageStatus
	Jan 11 07:57:57 old-k8s-version-086314 crio[574]: time="2026-01-11T07:57:57.005087621Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-974nl/dashboard-metrics-scraper" id=b4f2de02-cc9f-4fcc-8fd5-18571e05619f name=/runtime.v1.RuntimeService/CreateContainer
	Jan 11 07:57:57 old-k8s-version-086314 crio[574]: time="2026-01-11T07:57:57.005236529Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 11 07:57:57 old-k8s-version-086314 crio[574]: time="2026-01-11T07:57:57.010574353Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 11 07:57:57 old-k8s-version-086314 crio[574]: time="2026-01-11T07:57:57.011245812Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 11 07:57:57 old-k8s-version-086314 crio[574]: time="2026-01-11T07:57:57.043862301Z" level=info msg="Created container e7d5e624df5a69fbe9f27d96d8e518459c67009ca71ee543407b20facfc94c35: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-974nl/dashboard-metrics-scraper" id=b4f2de02-cc9f-4fcc-8fd5-18571e05619f name=/runtime.v1.RuntimeService/CreateContainer
	Jan 11 07:57:57 old-k8s-version-086314 crio[574]: time="2026-01-11T07:57:57.044460754Z" level=info msg="Starting container: e7d5e624df5a69fbe9f27d96d8e518459c67009ca71ee543407b20facfc94c35" id=c6639d7d-c770-4f6f-aac6-f55657351ebf name=/runtime.v1.RuntimeService/StartContainer
	Jan 11 07:57:57 old-k8s-version-086314 crio[574]: time="2026-01-11T07:57:57.046120885Z" level=info msg="Started container" PID=1813 containerID=e7d5e624df5a69fbe9f27d96d8e518459c67009ca71ee543407b20facfc94c35 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-974nl/dashboard-metrics-scraper id=c6639d7d-c770-4f6f-aac6-f55657351ebf name=/runtime.v1.RuntimeService/StartContainer sandboxID=0f39934227c8f036237197340e447d04cffcef8ca1b56196fad990df4f9b7fbc
	Jan 11 07:57:57 old-k8s-version-086314 crio[574]: time="2026-01-11T07:57:57.155253201Z" level=info msg="Removing container: d6dc0e457e50faf8b54d92edc77e33b862aa922d79b65842a2455710ff1790e8" id=3d7dbcd1-0cb3-43a1-bfe4-631983424a72 name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 11 07:57:57 old-k8s-version-086314 crio[574]: time="2026-01-11T07:57:57.165103403Z" level=info msg="Removed container d6dc0e457e50faf8b54d92edc77e33b862aa922d79b65842a2455710ff1790e8: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-974nl/dashboard-metrics-scraper" id=3d7dbcd1-0cb3-43a1-bfe4-631983424a72 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	e7d5e624df5a6       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           15 seconds ago      Exited              dashboard-metrics-scraper   2                   0f39934227c8f       dashboard-metrics-scraper-5f989dc9cf-974nl       kubernetes-dashboard
	3873b8c331905       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           23 seconds ago      Running             storage-provisioner         1                   8ddc6863a51a6       storage-provisioner                              kube-system
	ffd86ac4a50fa       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   33 seconds ago      Running             kubernetes-dashboard        0                   edf3c8cfc9e3a       kubernetes-dashboard-8694d4445c-5jfjr            kubernetes-dashboard
	1a68a697fa686       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           54 seconds ago      Running             busybox                     1                   897ecbf6e6be4       busybox                                          default
	0dd22536edc65       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                           54 seconds ago      Running             coredns                     0                   c2cc65fa850be       coredns-5dd5756b68-797jn                         kube-system
	2a89cf140b40e       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                           54 seconds ago      Running             kube-proxy                  0                   c4af81c0c2faf       kube-proxy-dl9xq                                 kube-system
	2887e5b7f8443       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           54 seconds ago      Exited              storage-provisioner         0                   8ddc6863a51a6       storage-provisioner                              kube-system
	df31d3d09ca38       4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251                                           54 seconds ago      Running             kindnet-cni                 0                   7b05ee17dd9b3       kindnet-9sfd9                                    kube-system
	21d8f6523f41f       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                           57 seconds ago      Running             kube-scheduler              0                   b61a9599fdd39       kube-scheduler-old-k8s-version-086314            kube-system
	0edb54dd45275       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                           57 seconds ago      Running             etcd                        0                   a17a9da7f99c8       etcd-old-k8s-version-086314                      kube-system
	d6b9eea8c7a54       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                           57 seconds ago      Running             kube-controller-manager     0                   c50924beadbe0       kube-controller-manager-old-k8s-version-086314   kube-system
	9aea2caaf5413       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                           57 seconds ago      Running             kube-apiserver              0                   21d898e699f54       kube-apiserver-old-k8s-version-086314            kube-system
	
	
	==> coredns [0dd22536edc65b5899a77d165fd818b4fd7c80d7bb59506a4d00cbcd12bd09f8] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 25cf5af2951e282c4b0e961a02fb5d3e57c974501832fee92eec17b5135b9ec9d9e87d2ac94e6d117a5ed3dd54e8800aa7b4479706eb54497145ccdb80397d1b
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:32973 - 63485 "HINFO IN 971775512634353947.4819203527870664801. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.041152422s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> describe nodes <==
	Name:               old-k8s-version-086314
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-086314
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4473aa4ffaa416872fe849e19c0ce3dabca02c04
	                    minikube.k8s.io/name=old-k8s-version-086314
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2026_01_11T07_56_11_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 11 Jan 2026 07:56:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-086314
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 11 Jan 2026 07:57:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 11 Jan 2026 07:57:48 +0000   Sun, 11 Jan 2026 07:56:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 11 Jan 2026 07:57:48 +0000   Sun, 11 Jan 2026 07:56:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 11 Jan 2026 07:57:48 +0000   Sun, 11 Jan 2026 07:56:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 11 Jan 2026 07:57:48 +0000   Sun, 11 Jan 2026 07:56:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    old-k8s-version-086314
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 19a608e996fda0d6a8d0aefd69620b69
	  System UUID:                d90f486e-fdcb-4fb9-aaa3-f827ad3b5d21
	  Boot ID:                    bf5ef4fd-1e5e-4242-b522-48b308ed11f1
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.35.0
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 coredns-5dd5756b68-797jn                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     109s
	  kube-system                 etcd-old-k8s-version-086314                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m2s
	  kube-system                 kindnet-9sfd9                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      109s
	  kube-system                 kube-apiserver-old-k8s-version-086314             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 kube-controller-manager-old-k8s-version-086314    200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m3s
	  kube-system                 kube-proxy-dl9xq                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-scheduler-old-k8s-version-086314             100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-974nl        0 (0%)        0 (0%)      0 (0%)           0 (0%)         43s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-5jfjr             0 (0%)        0 (0%)      0 (0%)           0 (0%)         43s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 108s                 kube-proxy       
	  Normal  Starting                 54s                  kube-proxy       
	  Normal  Starting                 2m7s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m7s (x8 over 2m7s)  kubelet          Node old-k8s-version-086314 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m7s (x8 over 2m7s)  kubelet          Node old-k8s-version-086314 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m7s (x8 over 2m7s)  kubelet          Node old-k8s-version-086314 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    2m2s                 kubelet          Node old-k8s-version-086314 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  2m2s                 kubelet          Node old-k8s-version-086314 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     2m2s                 kubelet          Node old-k8s-version-086314 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m2s                 kubelet          Starting kubelet.
	  Normal  RegisteredNode           109s                 node-controller  Node old-k8s-version-086314 event: Registered Node old-k8s-version-086314 in Controller
	  Normal  NodeReady                96s                  kubelet          Node old-k8s-version-086314 status is now: NodeReady
	  Normal  Starting                 59s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  58s (x9 over 59s)    kubelet          Node old-k8s-version-086314 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    58s (x8 over 59s)    kubelet          Node old-k8s-version-086314 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     58s (x7 over 59s)    kubelet          Node old-k8s-version-086314 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           43s                  node-controller  Node old-k8s-version-086314 event: Registered Node old-k8s-version-086314 in Controller
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff d2 27 60 f0 e8 33 08 06
	[  +7.744805] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ae ab 39 92 a9 51 08 06
	[  +3.804668] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff f6 cc 68 e4 77 08 08 06
	[  +0.000399] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff d2 27 60 f0 e8 33 08 06
	[ +13.685163] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff ce 63 58 3f ae 06 08 06
	[  +0.000359] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ae ab 39 92 a9 51 08 06
	[ +15.294069] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 62 fd 79 5e 2e 8a 08 06
	[  +0.000335] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 92 a7 42 30 c5 33 08 06
	[  +0.001094] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 76 49 4e 78 31 d8 08 06
	[Jan11 07:56] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 1e 72 0a 69 e2 8c 08 06
	[  +0.129588] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 46 31 8d 74 d3 50 08 06
	[ +23.675508] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 72 b4 a7 bf 59 ba 08 06
	[  +0.000409] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 46 31 8d 74 d3 50 08 06
	
	
	==> etcd [0edb54dd4527597f168155b2e84150456758f58bfbdfc095e1eb8fb25625aba1] <==
	{"level":"info","ts":"2026-01-11T07:57:14.619691Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2026-01-11T07:57:14.619702Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2026-01-11T07:57:14.620552Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2026-01-11T07:57:14.620869Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"f23060b075c4c089","initial-advertise-peer-urls":["https://192.168.103.2:2380"],"listen-peer-urls":["https://192.168.103.2:2380"],"advertise-client-urls":["https://192.168.103.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.103.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2026-01-11T07:57:14.620976Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2026-01-11T07:57:14.621219Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2026-01-11T07:57:14.621246Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2026-01-11T07:57:14.62532Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 switched to configuration voters=(17451554867067011209)"}
	{"level":"info","ts":"2026-01-11T07:57:14.626554Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","added-peer-id":"f23060b075c4c089","added-peer-peer-urls":["https://192.168.103.2:2380"]}
	{"level":"info","ts":"2026-01-11T07:57:14.626866Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","cluster-version":"3.5"}
	{"level":"info","ts":"2026-01-11T07:57:14.626922Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2026-01-11T07:57:16.102435Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 is starting a new election at term 2"}
	{"level":"info","ts":"2026-01-11T07:57:16.102487Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became pre-candidate at term 2"}
	{"level":"info","ts":"2026-01-11T07:57:16.10252Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgPreVoteResp from f23060b075c4c089 at term 2"}
	{"level":"info","ts":"2026-01-11T07:57:16.102533Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became candidate at term 3"}
	{"level":"info","ts":"2026-01-11T07:57:16.102538Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgVoteResp from f23060b075c4c089 at term 3"}
	{"level":"info","ts":"2026-01-11T07:57:16.102546Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became leader at term 3"}
	{"level":"info","ts":"2026-01-11T07:57:16.102553Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f23060b075c4c089 elected leader f23060b075c4c089 at term 3"}
	{"level":"info","ts":"2026-01-11T07:57:16.103507Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"f23060b075c4c089","local-member-attributes":"{Name:old-k8s-version-086314 ClientURLs:[https://192.168.103.2:2379]}","request-path":"/0/members/f23060b075c4c089/attributes","cluster-id":"3336683c081d149d","publish-timeout":"7s"}
	{"level":"info","ts":"2026-01-11T07:57:16.103568Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-11T07:57:16.103591Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-11T07:57:16.103893Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2026-01-11T07:57:16.103981Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2026-01-11T07:57:16.104832Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2026-01-11T07:57:16.105045Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.103.2:2379"}
	
	
	==> kernel <==
	 07:58:12 up 40 min,  0 user,  load average: 3.49, 3.46, 2.50
	Linux old-k8s-version-086314 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [df31d3d09ca389436cb3a01ded5e81bc1d65c374669f941d29494f6983c8f2ec] <==
	I0111 07:57:18.508287       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0111 07:57:18.508554       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I0111 07:57:18.508750       1 main.go:148] setting mtu 1500 for CNI 
	I0111 07:57:18.508775       1 main.go:178] kindnetd IP family: "ipv4"
	I0111 07:57:18.508785       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2026-01-11T07:57:18Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0111 07:57:18.808189       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0111 07:57:18.808265       1 controller.go:381] "Waiting for informer caches to sync"
	I0111 07:57:18.808280       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0111 07:57:18.808430       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I0111 07:57:19.108362       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0111 07:57:19.108396       1 metrics.go:72] Registering metrics
	I0111 07:57:19.108587       1 controller.go:711] "Syncing nftables rules"
	I0111 07:57:28.808040       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0111 07:57:28.808231       1 main.go:301] handling current node
	I0111 07:57:38.808273       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0111 07:57:38.808319       1 main.go:301] handling current node
	I0111 07:57:48.809058       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0111 07:57:48.809151       1 main.go:301] handling current node
	I0111 07:57:58.809012       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0111 07:57:58.809050       1 main.go:301] handling current node
	I0111 07:58:08.813868       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0111 07:58:08.813902       1 main.go:301] handling current node
	
	
	==> kube-apiserver [9aea2caaf541351d25ce6ed8626deeaa0c2f10b82ca916b553d95e5d0f95e0b8] <==
	I0111 07:57:17.021649       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0111 07:57:17.085890       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0111 07:57:17.116857       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0111 07:57:17.116882       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0111 07:57:17.116904       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0111 07:57:17.116925       1 aggregator.go:166] initial CRD sync complete...
	I0111 07:57:17.116932       1 autoregister_controller.go:141] Starting autoregister controller
	I0111 07:57:17.116937       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0111 07:57:17.116942       1 cache.go:39] Caches are synced for autoregister controller
	I0111 07:57:17.117202       1 shared_informer.go:318] Caches are synced for configmaps
	I0111 07:57:17.117235       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0111 07:57:17.117307       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0111 07:57:17.117353       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0111 07:57:17.157502       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0111 07:57:17.906636       1 controller.go:624] quota admission added evaluator for: namespaces
	I0111 07:57:17.936927       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0111 07:57:17.953316       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0111 07:57:17.960589       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0111 07:57:17.967045       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0111 07:57:17.999908       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.100.80.176"}
	I0111 07:57:18.013108       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.107.217.137"}
	I0111 07:57:18.023257       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0111 07:57:29.905435       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I0111 07:57:29.907839       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0111 07:57:29.915252       1 controller.go:624] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [d6b9eea8c7a54fe3616dbfe3782652d9def84a72df838c12acfb555c6b06365f] <==
	I0111 07:57:29.964436       1 shared_informer.go:318] Caches are synced for certificate-csrapproving
	I0111 07:57:29.981822       1 shared_informer.go:318] Caches are synced for disruption
	I0111 07:57:29.985342       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-serving
	I0111 07:57:29.990065       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="30.556551ms"
	I0111 07:57:29.990214       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0111 07:57:29.990235       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-client
	I0111 07:57:29.990781       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="451.309µs"
	I0111 07:57:29.990917       1 shared_informer.go:318] Caches are synced for stateful set
	I0111 07:57:30.003817       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="40.538504ms"
	I0111 07:57:30.004171       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="148.365µs"
	I0111 07:57:30.024281       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="112.095µs"
	I0111 07:57:30.061727       1 shared_informer.go:318] Caches are synced for resource quota
	I0111 07:57:30.089788       1 shared_informer.go:318] Caches are synced for resource quota
	I0111 07:57:30.417407       1 shared_informer.go:318] Caches are synced for garbage collector
	I0111 07:57:30.437584       1 shared_informer.go:318] Caches are synced for garbage collector
	I0111 07:57:30.437627       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0111 07:57:35.103734       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="92.479µs"
	I0111 07:57:36.109990       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="238.375µs"
	I0111 07:57:37.117501       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="117.407µs"
	I0111 07:57:39.128181       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="7.43427ms"
	I0111 07:57:39.128746       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="106.32µs"
	I0111 07:57:53.219753       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="9.497848ms"
	I0111 07:57:53.219926       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="127.334µs"
	I0111 07:57:57.165967       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="68.386µs"
	I0111 07:58:01.754086       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="97.903µs"
	
	
	==> kube-proxy [2a89cf140b40e92be6883546b9d6eaa87b04656fede8886d6f6498fd3fef6b08] <==
	I0111 07:57:18.394292       1 server_others.go:69] "Using iptables proxy"
	I0111 07:57:18.403323       1 node.go:141] Successfully retrieved node IP: 192.168.103.2
	I0111 07:57:18.421855       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0111 07:57:18.424130       1 server_others.go:152] "Using iptables Proxier"
	I0111 07:57:18.424167       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0111 07:57:18.424173       1 server_others.go:438] "Defaulting to no-op detect-local"
	I0111 07:57:18.424194       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0111 07:57:18.424476       1 server.go:846] "Version info" version="v1.28.0"
	I0111 07:57:18.424495       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0111 07:57:18.426287       1 config.go:97] "Starting endpoint slice config controller"
	I0111 07:57:18.426285       1 config.go:188] "Starting service config controller"
	I0111 07:57:18.426341       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0111 07:57:18.427013       1 config.go:315] "Starting node config controller"
	I0111 07:57:18.427031       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0111 07:57:18.427050       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0111 07:57:18.527588       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0111 07:57:18.527639       1 shared_informer.go:318] Caches are synced for service config
	I0111 07:57:18.527642       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [21d8f6523f41f21ffeda4f2bbe6d7a6d6e3b5777d00e6636a3e24a10e4d289b6] <==
	I0111 07:57:15.065495       1 serving.go:348] Generated self-signed cert in-memory
	W0111 07:57:17.076411       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0111 07:57:17.076553       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0111 07:57:17.076577       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0111 07:57:17.076606       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0111 07:57:17.093471       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I0111 07:57:17.093496       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0111 07:57:17.094673       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0111 07:57:17.094705       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0111 07:57:17.095499       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0111 07:57:17.095587       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0111 07:57:17.194917       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jan 11 07:57:31 old-k8s-version-086314 kubelet[739]: E0111 07:57:31.065677     739 projected.go:198] Error preparing data for projected volume kube-api-access-nwr2w for pod kubernetes-dashboard/kubernetes-dashboard-8694d4445c-5jfjr: failed to sync configmap cache: timed out waiting for the condition
	Jan 11 07:57:31 old-k8s-version-086314 kubelet[739]: E0111 07:57:31.065757     739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/eda0c88c-5a79-4493-85c4-7bc8e8cbd7e0-kube-api-access-nwr2w podName:eda0c88c-5a79-4493-85c4-7bc8e8cbd7e0 nodeName:}" failed. No retries permitted until 2026-01-11 07:57:31.565735623 +0000 UTC m=+17.656765643 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-nwr2w" (UniqueName: "kubernetes.io/projected/eda0c88c-5a79-4493-85c4-7bc8e8cbd7e0-kube-api-access-nwr2w") pod "kubernetes-dashboard-8694d4445c-5jfjr" (UID: "eda0c88c-5a79-4493-85c4-7bc8e8cbd7e0") : failed to sync configmap cache: timed out waiting for the condition
	Jan 11 07:57:31 old-k8s-version-086314 kubelet[739]: E0111 07:57:31.066731     739 projected.go:292] Couldn't get configMap kubernetes-dashboard/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Jan 11 07:57:31 old-k8s-version-086314 kubelet[739]: E0111 07:57:31.066767     739 projected.go:198] Error preparing data for projected volume kube-api-access-nlgmf for pod kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-974nl: failed to sync configmap cache: timed out waiting for the condition
	Jan 11 07:57:31 old-k8s-version-086314 kubelet[739]: E0111 07:57:31.066875     739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/88b37ac7-cfb3-404c-9878-76e1edd263ef-kube-api-access-nlgmf podName:88b37ac7-cfb3-404c-9878-76e1edd263ef nodeName:}" failed. No retries permitted until 2026-01-11 07:57:31.566849011 +0000 UTC m=+17.657879042 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-nlgmf" (UniqueName: "kubernetes.io/projected/88b37ac7-cfb3-404c-9878-76e1edd263ef-kube-api-access-nlgmf") pod "dashboard-metrics-scraper-5f989dc9cf-974nl" (UID: "88b37ac7-cfb3-404c-9878-76e1edd263ef") : failed to sync configmap cache: timed out waiting for the condition
	Jan 11 07:57:35 old-k8s-version-086314 kubelet[739]: I0111 07:57:35.086884     739 scope.go:117] "RemoveContainer" containerID="07b5ef40bef9366f55d3e9dcb2c55cab5cec15b5fe6b80eb538fdff9aba5a6c1"
	Jan 11 07:57:36 old-k8s-version-086314 kubelet[739]: I0111 07:57:36.091162     739 scope.go:117] "RemoveContainer" containerID="07b5ef40bef9366f55d3e9dcb2c55cab5cec15b5fe6b80eb538fdff9aba5a6c1"
	Jan 11 07:57:36 old-k8s-version-086314 kubelet[739]: I0111 07:57:36.091487     739 scope.go:117] "RemoveContainer" containerID="d6dc0e457e50faf8b54d92edc77e33b862aa922d79b65842a2455710ff1790e8"
	Jan 11 07:57:36 old-k8s-version-086314 kubelet[739]: E0111 07:57:36.091885     739 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-974nl_kubernetes-dashboard(88b37ac7-cfb3-404c-9878-76e1edd263ef)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-974nl" podUID="88b37ac7-cfb3-404c-9878-76e1edd263ef"
	Jan 11 07:57:37 old-k8s-version-086314 kubelet[739]: I0111 07:57:37.099363     739 scope.go:117] "RemoveContainer" containerID="d6dc0e457e50faf8b54d92edc77e33b862aa922d79b65842a2455710ff1790e8"
	Jan 11 07:57:37 old-k8s-version-086314 kubelet[739]: E0111 07:57:37.099759     739 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-974nl_kubernetes-dashboard(88b37ac7-cfb3-404c-9878-76e1edd263ef)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-974nl" podUID="88b37ac7-cfb3-404c-9878-76e1edd263ef"
	Jan 11 07:57:39 old-k8s-version-086314 kubelet[739]: I0111 07:57:39.120701     739 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-5jfjr" podStartSLOduration=3.508553864 podCreationTimestamp="2026-01-11 07:57:29 +0000 UTC" firstStartedPulling="2026-01-11 07:57:31.78561267 +0000 UTC m=+17.876642698" lastFinishedPulling="2026-01-11 07:57:38.397693298 +0000 UTC m=+24.488723331" observedRunningTime="2026-01-11 07:57:39.120567096 +0000 UTC m=+25.211597136" watchObservedRunningTime="2026-01-11 07:57:39.120634497 +0000 UTC m=+25.211664534"
	Jan 11 07:57:41 old-k8s-version-086314 kubelet[739]: I0111 07:57:41.744497     739 scope.go:117] "RemoveContainer" containerID="d6dc0e457e50faf8b54d92edc77e33b862aa922d79b65842a2455710ff1790e8"
	Jan 11 07:57:41 old-k8s-version-086314 kubelet[739]: E0111 07:57:41.744761     739 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-974nl_kubernetes-dashboard(88b37ac7-cfb3-404c-9878-76e1edd263ef)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-974nl" podUID="88b37ac7-cfb3-404c-9878-76e1edd263ef"
	Jan 11 07:57:49 old-k8s-version-086314 kubelet[739]: I0111 07:57:49.132383     739 scope.go:117] "RemoveContainer" containerID="2887e5b7f844309154ee2ebabd60573461082436207c18a29bc160eef0337f27"
	Jan 11 07:57:57 old-k8s-version-086314 kubelet[739]: I0111 07:57:57.002060     739 scope.go:117] "RemoveContainer" containerID="d6dc0e457e50faf8b54d92edc77e33b862aa922d79b65842a2455710ff1790e8"
	Jan 11 07:57:57 old-k8s-version-086314 kubelet[739]: I0111 07:57:57.153902     739 scope.go:117] "RemoveContainer" containerID="d6dc0e457e50faf8b54d92edc77e33b862aa922d79b65842a2455710ff1790e8"
	Jan 11 07:57:57 old-k8s-version-086314 kubelet[739]: I0111 07:57:57.154138     739 scope.go:117] "RemoveContainer" containerID="e7d5e624df5a69fbe9f27d96d8e518459c67009ca71ee543407b20facfc94c35"
	Jan 11 07:57:57 old-k8s-version-086314 kubelet[739]: E0111 07:57:57.154531     739 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-974nl_kubernetes-dashboard(88b37ac7-cfb3-404c-9878-76e1edd263ef)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-974nl" podUID="88b37ac7-cfb3-404c-9878-76e1edd263ef"
	Jan 11 07:58:01 old-k8s-version-086314 kubelet[739]: I0111 07:58:01.744481     739 scope.go:117] "RemoveContainer" containerID="e7d5e624df5a69fbe9f27d96d8e518459c67009ca71ee543407b20facfc94c35"
	Jan 11 07:58:01 old-k8s-version-086314 kubelet[739]: E0111 07:58:01.744744     739 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-974nl_kubernetes-dashboard(88b37ac7-cfb3-404c-9878-76e1edd263ef)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-974nl" podUID="88b37ac7-cfb3-404c-9878-76e1edd263ef"
	Jan 11 07:58:07 old-k8s-version-086314 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Jan 11 07:58:07 old-k8s-version-086314 systemd[1]: kubelet.service: Deactivated successfully.
	Jan 11 07:58:07 old-k8s-version-086314 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Jan 11 07:58:07 old-k8s-version-086314 systemd[1]: kubelet.service: Consumed 1.555s CPU time.
	
	
	==> kubernetes-dashboard [ffd86ac4a50fa6d2ff0de994b260b31f36892f9782d8d5d20f9d83f614c2a8b6] <==
	2026/01/11 07:57:38 Starting overwatch
	2026/01/11 07:57:38 Using namespace: kubernetes-dashboard
	2026/01/11 07:57:38 Using in-cluster config to connect to apiserver
	2026/01/11 07:57:38 Using secret token for csrf signing
	2026/01/11 07:57:38 Initializing csrf token from kubernetes-dashboard-csrf secret
	2026/01/11 07:57:38 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2026/01/11 07:57:38 Successful initial request to the apiserver, version: v1.28.0
	2026/01/11 07:57:38 Generating JWE encryption key
	2026/01/11 07:57:38 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2026/01/11 07:57:38 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2026/01/11 07:57:38 Initializing JWE encryption key from synchronized object
	2026/01/11 07:57:38 Creating in-cluster Sidecar client
	2026/01/11 07:57:38 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2026/01/11 07:57:38 Serving insecurely on HTTP port: 9090
	2026/01/11 07:58:08 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [2887e5b7f844309154ee2ebabd60573461082436207c18a29bc160eef0337f27] <==
	I0111 07:57:18.362029       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0111 07:57:48.365552       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [3873b8c3319057ffe28b5dd096d787bb61bf55ed50eb16c467bff2842ce12aea] <==
	I0111 07:57:49.181178       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0111 07:57:49.189654       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0111 07:57:49.189708       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0111 07:58:06.587904       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0111 07:58:06.588095       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-086314_dca91392-e995-4255-871b-93055fc067ff!
	I0111 07:58:06.588050       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c5462a82-2278-4014-bc22-31381ebd17ec", APIVersion:"v1", ResourceVersion:"664", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-086314_dca91392-e995-4255-871b-93055fc067ff became leader
	I0111 07:58:06.688364       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-086314_dca91392-e995-4255-871b-93055fc067ff!
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-086314 -n old-k8s-version-086314
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-086314 -n old-k8s-version-086314: exit status 2 (369.611668ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context old-k8s-version-086314 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (6.78s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Pause (7.06s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-875594 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p no-preload-875594 --alsologtostderr -v=1: exit status 80 (2.07579705s)

                                                
                                                
-- stdout --
	* Pausing node no-preload-875594 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0111 07:58:14.088355  324875 out.go:360] Setting OutFile to fd 1 ...
	I0111 07:58:14.088496  324875 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 07:58:14.088505  324875 out.go:374] Setting ErrFile to fd 2...
	I0111 07:58:14.088511  324875 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 07:58:14.089290  324875 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22402-4689/.minikube/bin
	I0111 07:58:14.089639  324875 out.go:368] Setting JSON to false
	I0111 07:58:14.089664  324875 mustload.go:66] Loading cluster: no-preload-875594
	I0111 07:58:14.090167  324875 config.go:182] Loaded profile config "no-preload-875594": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 07:58:14.090753  324875 cli_runner.go:164] Run: docker container inspect no-preload-875594 --format={{.State.Status}}
	I0111 07:58:14.118832  324875 host.go:66] Checking if "no-preload-875594" exists ...
	I0111 07:58:14.119490  324875 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0111 07:58:14.187725  324875 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:78 OomKillDisable:false NGoroutines:85 SystemTime:2026-01-11 07:58:14.176537714 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0111 07:58:14.188515  324875 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22376/minikube-v1.37.0-1767438792-22376-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1767438792-22376/minikube-v1.37.0-1767438792-22376-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1767438792-22376-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) preload-source:auto profile:no-preload-875594 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) rosetta:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool
=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I0111 07:58:14.191100  324875 out.go:179] * Pausing node no-preload-875594 ... 
	I0111 07:58:14.192408  324875 host.go:66] Checking if "no-preload-875594" exists ...
	I0111 07:58:14.192727  324875 ssh_runner.go:195] Run: systemctl --version
	I0111 07:58:14.192775  324875 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-875594
	I0111 07:58:14.215795  324875 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/22402-4689/.minikube/machines/no-preload-875594/id_rsa Username:docker}
	I0111 07:58:14.321645  324875 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0111 07:58:14.336404  324875 pause.go:52] kubelet running: true
	I0111 07:58:14.336467  324875 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0111 07:58:14.506833  324875 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I0111 07:58:14.506941  324875 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0111 07:58:14.587079  324875 cri.go:96] found id: "8e14bd67002a296324708f9d3d673b05eb3191d9b342a923b346b535dc8eb241"
	I0111 07:58:14.587110  324875 cri.go:96] found id: "3af5364625761c8a158b701e137ef416af7b328b6f090ab3089466d0fc65554d"
	I0111 07:58:14.587117  324875 cri.go:96] found id: "c3df2fd79f57075ef7e6564c31e44d2dbd21643d54ef7484250e39a61d8830e3"
	I0111 07:58:14.587124  324875 cri.go:96] found id: "a1c19f090e2e328cdfb978691c2690587d4e415bd9b0a43536b9b302bf85ac31"
	I0111 07:58:14.587129  324875 cri.go:96] found id: "a15969534d074b462177067db2228455e3f65e99df3930f8fb7ec7f2f41160f8"
	I0111 07:58:14.587136  324875 cri.go:96] found id: "86dd9362e4e980669196db9a6fc9ed3859ae025e5b431399b21fd17fef766db2"
	I0111 07:58:14.587142  324875 cri.go:96] found id: "04186244a4d8af4b1401de6299129517961f7e404cad46c3f8e49cbbbe78c312"
	I0111 07:58:14.587147  324875 cri.go:96] found id: "a56589dc7dd4d5b92389fcdb2961a8b65557ca9359492ffc4fb9f790b05e4dd4"
	I0111 07:58:14.587153  324875 cri.go:96] found id: "8198831067c65d7e7b8f32225ee8852e7d6229fee77dc266d93b4d5a687f2cd3"
	I0111 07:58:14.587164  324875 cri.go:96] found id: "a8c9b7ad1d71fbba8c61c0c3d563b75c6c90a4509aeb87ca901fe8c8c58e528d"
	I0111 07:58:14.587174  324875 cri.go:96] found id: "c8f540e41b1874f8da0335702431dc675c4957ca96add1e1ff08f9e41f526ffa"
	I0111 07:58:14.587180  324875 cri.go:96] found id: ""
	I0111 07:58:14.587236  324875 ssh_runner.go:195] Run: sudo runc list -f json
	I0111 07:58:14.602369  324875 retry.go:84] will retry after 300ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-11T07:58:14Z" level=error msg="open /run/runc: no such file or directory"
	I0111 07:58:14.898617  324875 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0111 07:58:14.912957  324875 pause.go:52] kubelet running: false
	I0111 07:58:14.913006  324875 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0111 07:58:15.062736  324875 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I0111 07:58:15.062835  324875 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0111 07:58:15.137787  324875 cri.go:96] found id: "8e14bd67002a296324708f9d3d673b05eb3191d9b342a923b346b535dc8eb241"
	I0111 07:58:15.137823  324875 cri.go:96] found id: "3af5364625761c8a158b701e137ef416af7b328b6f090ab3089466d0fc65554d"
	I0111 07:58:15.137830  324875 cri.go:96] found id: "c3df2fd79f57075ef7e6564c31e44d2dbd21643d54ef7484250e39a61d8830e3"
	I0111 07:58:15.137835  324875 cri.go:96] found id: "a1c19f090e2e328cdfb978691c2690587d4e415bd9b0a43536b9b302bf85ac31"
	I0111 07:58:15.137841  324875 cri.go:96] found id: "a15969534d074b462177067db2228455e3f65e99df3930f8fb7ec7f2f41160f8"
	I0111 07:58:15.137846  324875 cri.go:96] found id: "86dd9362e4e980669196db9a6fc9ed3859ae025e5b431399b21fd17fef766db2"
	I0111 07:58:15.137850  324875 cri.go:96] found id: "04186244a4d8af4b1401de6299129517961f7e404cad46c3f8e49cbbbe78c312"
	I0111 07:58:15.137855  324875 cri.go:96] found id: "a56589dc7dd4d5b92389fcdb2961a8b65557ca9359492ffc4fb9f790b05e4dd4"
	I0111 07:58:15.137859  324875 cri.go:96] found id: "8198831067c65d7e7b8f32225ee8852e7d6229fee77dc266d93b4d5a687f2cd3"
	I0111 07:58:15.137867  324875 cri.go:96] found id: "a8c9b7ad1d71fbba8c61c0c3d563b75c6c90a4509aeb87ca901fe8c8c58e528d"
	I0111 07:58:15.137870  324875 cri.go:96] found id: "c8f540e41b1874f8da0335702431dc675c4957ca96add1e1ff08f9e41f526ffa"
	I0111 07:58:15.137873  324875 cri.go:96] found id: ""
	I0111 07:58:15.137924  324875 ssh_runner.go:195] Run: sudo runc list -f json
	I0111 07:58:15.711746  324875 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0111 07:58:15.726593  324875 pause.go:52] kubelet running: false
	I0111 07:58:15.726660  324875 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0111 07:58:15.958748  324875 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I0111 07:58:15.958856  324875 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0111 07:58:16.036248  324875 cri.go:96] found id: "8e14bd67002a296324708f9d3d673b05eb3191d9b342a923b346b535dc8eb241"
	I0111 07:58:16.036275  324875 cri.go:96] found id: "3af5364625761c8a158b701e137ef416af7b328b6f090ab3089466d0fc65554d"
	I0111 07:58:16.036281  324875 cri.go:96] found id: "c3df2fd79f57075ef7e6564c31e44d2dbd21643d54ef7484250e39a61d8830e3"
	I0111 07:58:16.036286  324875 cri.go:96] found id: "a1c19f090e2e328cdfb978691c2690587d4e415bd9b0a43536b9b302bf85ac31"
	I0111 07:58:16.036291  324875 cri.go:96] found id: "a15969534d074b462177067db2228455e3f65e99df3930f8fb7ec7f2f41160f8"
	I0111 07:58:16.036297  324875 cri.go:96] found id: "86dd9362e4e980669196db9a6fc9ed3859ae025e5b431399b21fd17fef766db2"
	I0111 07:58:16.036301  324875 cri.go:96] found id: "04186244a4d8af4b1401de6299129517961f7e404cad46c3f8e49cbbbe78c312"
	I0111 07:58:16.036305  324875 cri.go:96] found id: "a56589dc7dd4d5b92389fcdb2961a8b65557ca9359492ffc4fb9f790b05e4dd4"
	I0111 07:58:16.036309  324875 cri.go:96] found id: "8198831067c65d7e7b8f32225ee8852e7d6229fee77dc266d93b4d5a687f2cd3"
	I0111 07:58:16.036318  324875 cri.go:96] found id: "a8c9b7ad1d71fbba8c61c0c3d563b75c6c90a4509aeb87ca901fe8c8c58e528d"
	I0111 07:58:16.036335  324875 cri.go:96] found id: "c8f540e41b1874f8da0335702431dc675c4957ca96add1e1ff08f9e41f526ffa"
	I0111 07:58:16.036344  324875 cri.go:96] found id: ""
	I0111 07:58:16.036391  324875 ssh_runner.go:195] Run: sudo runc list -f json
	I0111 07:58:16.052637  324875 out.go:203] 
	W0111 07:58:16.054113  324875 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-11T07:58:16Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-11T07:58:16Z" level=error msg="open /run/runc: no such file or directory"
	
	W0111 07:58:16.054130  324875 out.go:285] * 
	* 
	W0111 07:58:16.055974  324875 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0111 07:58:16.058400  324875 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p no-preload-875594 --alsologtostderr -v=1 failed: exit status 80
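For reference, the stderr above shows the two commands the pause path issues over SSH before giving up: a crictl listing of CRI containers in the kube-system, kubernetes-dashboard and istio-operator namespaces, followed by `sudo runc list -f json`, which fails on this node with "open /run/runc: no such file or directory" and surfaces as GUEST_PAUSE / exit status 80. The following is only a minimal sketch that replays those same two commands locally with os/exec so the failure can be reproduced on the node directly; it assumes sudo, crictl and runc are on PATH and is not minikube's actual implementation.

// Sketch only: replays the two commands from the stderr above, locally via
// os/exec instead of minikube's SSH runner. The crictl label filter and the
// `runc list -f json` invocation are copied from the log; error handling is
// simplified for illustration.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// List CRI containers in the kube-system namespace (one of the three
	// namespaces the pause path queries in the log above).
	crictl := exec.Command("sudo", "crictl", "--timeout=10s", "ps", "-a",
		"--quiet", "--label", "io.kubernetes.pod.namespace=kube-system")
	ids, err := crictl.CombinedOutput()
	fmt.Printf("crictl: err=%v\n%s", err, ids)

	// List runc containers; in this run it fails because /run/runc is absent.
	runcList := exec.Command("sudo", "runc", "list", "-f", "json")
	out, err := runcList.CombinedOutput()
	fmt.Printf("runc list: err=%v\n%s", err, out)
}

The retry at 07:58:14 in the stderr hits the same "open /run/runc" error again at 07:58:16, which is the error the test then reports as exit status 80.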
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-875594
helpers_test.go:244: (dbg) docker inspect no-preload-875594:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "7deefeed7d706251e89e7d05143766bbaeeb5dd4cef2384ff91186656aae8676",
	        "Created": "2026-01-11T07:55:58.906575033Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 313241,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2026-01-11T07:57:15.380059408Z",
	            "FinishedAt": "2026-01-11T07:57:14.237616449Z"
	        },
	        "Image": "sha256:9d81ad6f098dcf669510a21e6f98da4d3301079c139c22f3d7bcee050830f45e",
	        "ResolvConfPath": "/var/lib/docker/containers/7deefeed7d706251e89e7d05143766bbaeeb5dd4cef2384ff91186656aae8676/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7deefeed7d706251e89e7d05143766bbaeeb5dd4cef2384ff91186656aae8676/hostname",
	        "HostsPath": "/var/lib/docker/containers/7deefeed7d706251e89e7d05143766bbaeeb5dd4cef2384ff91186656aae8676/hosts",
	        "LogPath": "/var/lib/docker/containers/7deefeed7d706251e89e7d05143766bbaeeb5dd4cef2384ff91186656aae8676/7deefeed7d706251e89e7d05143766bbaeeb5dd4cef2384ff91186656aae8676-json.log",
	        "Name": "/no-preload-875594",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-875594:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-875594",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "7deefeed7d706251e89e7d05143766bbaeeb5dd4cef2384ff91186656aae8676",
	                "LowerDir": "/var/lib/docker/overlay2/838dead28b8cc9aab45322b83a05d005735e451912ea523da4284e810b6d4ac3-init/diff:/var/lib/docker/overlay2/fbed3e2386e680328722e9ed68250a4aae1dd3b50a64848ee2ebb685fb1cc002/diff",
	                "MergedDir": "/var/lib/docker/overlay2/838dead28b8cc9aab45322b83a05d005735e451912ea523da4284e810b6d4ac3/merged",
	                "UpperDir": "/var/lib/docker/overlay2/838dead28b8cc9aab45322b83a05d005735e451912ea523da4284e810b6d4ac3/diff",
	                "WorkDir": "/var/lib/docker/overlay2/838dead28b8cc9aab45322b83a05d005735e451912ea523da4284e810b6d4ac3/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-875594",
	                "Source": "/var/lib/docker/volumes/no-preload-875594/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-875594",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-875594",
	                "name.minikube.sigs.k8s.io": "no-preload-875594",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "c840b7f762ffa22847a67e64e4eaa6c32604d14e49589caabb3e94fc1fcf13ca",
	            "SandboxKey": "/var/run/docker/netns/c840b7f762ff",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33113"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33114"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33117"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33115"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33116"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-875594": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "e8428f34acce91e05870737f0d974d96eb8dd52ccd21658be3061a5635b849ca",
	                    "EndpointID": "61fefbdcd24068a6a0c9f13853091570a4e9a9a21d385e2d4c1d29b78db8ed0f",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "0e:de:63:8a:15:c4",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-875594",
	                        "7deefeed7d70"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
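The port mappings in the inspect output above are what the helpers read to reach the node: the stderr earlier resolves the SSH endpoint with `docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-875594`, which against this output yields 127.0.0.1:33113. Below is a small sketch that does the same extraction by unmarshalling the inspect JSON instead of using a --format template; field names follow the output above, and the helper name is invented for illustration.

// Sketch only: extract the host port mapped to the container's 22/tcp from
// `docker inspect` JSON, mirroring the --format template used in the logs.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type inspectEntry struct {
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIp   string
			HostPort string
		}
	}
}

// sshHostPort is a hypothetical helper, not part of minikube.
func sshHostPort(container string) (string, error) {
	out, err := exec.Command("docker", "inspect", container).Output()
	if err != nil {
		return "", err
	}
	var entries []inspectEntry
	if err := json.Unmarshal(out, &entries); err != nil {
		return "", err
	}
	if len(entries) == 0 || len(entries[0].NetworkSettings.Ports["22/tcp"]) == 0 {
		return "", fmt.Errorf("no 22/tcp mapping for %s", container)
	}
	return entries[0].NetworkSettings.Ports["22/tcp"][0].HostPort, nil
}

func main() {
	port, err := sshHostPort("no-preload-875594")
	fmt.Println(port, err)
}

Note that the empty HostPort values under HostConfig.PortBindings mean Docker assigned these ports at container start, which is why the helpers query NetworkSettings.Ports rather than the bindings.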
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-875594 -n no-preload-875594
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-875594 -n no-preload-875594: exit status 2 (391.605553ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-875594 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p no-preload-875594 logs -n 25: (1.553667725s)
helpers_test.go:261: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ ssh     │ -p bridge-181803 sudo crio config                                                                                                                                                                                                             │ bridge-181803                │ jenkins │ v1.37.0 │ 11 Jan 26 07:56 UTC │ 11 Jan 26 07:56 UTC │
	│ delete  │ -p bridge-181803                                                                                                                                                                                                                              │ bridge-181803                │ jenkins │ v1.37.0 │ 11 Jan 26 07:56 UTC │ 11 Jan 26 07:56 UTC │
	│ delete  │ -p disable-driver-mounts-768607                                                                                                                                                                                                               │ disable-driver-mounts-768607 │ jenkins │ v1.37.0 │ 11 Jan 26 07:56 UTC │ 11 Jan 26 07:56 UTC │
	│ start   │ -p default-k8s-diff-port-585688 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-585688 │ jenkins │ v1.37.0 │ 11 Jan 26 07:56 UTC │ 11 Jan 26 07:57 UTC │
	│ addons  │ enable metrics-server -p no-preload-875594 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-875594            │ jenkins │ v1.37.0 │ 11 Jan 26 07:56 UTC │                     │
	│ stop    │ -p no-preload-875594 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-875594            │ jenkins │ v1.37.0 │ 11 Jan 26 07:56 UTC │ 11 Jan 26 07:57 UTC │
	│ addons  │ enable metrics-server -p embed-certs-285966 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-285966           │ jenkins │ v1.37.0 │ 11 Jan 26 07:57 UTC │                     │
	│ stop    │ -p embed-certs-285966 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-285966           │ jenkins │ v1.37.0 │ 11 Jan 26 07:57 UTC │ 11 Jan 26 07:57 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-086314 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-086314       │ jenkins │ v1.37.0 │ 11 Jan 26 07:57 UTC │ 11 Jan 26 07:57 UTC │
	│ start   │ -p old-k8s-version-086314 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-086314       │ jenkins │ v1.37.0 │ 11 Jan 26 07:57 UTC │ 11 Jan 26 07:57 UTC │
	│ addons  │ enable dashboard -p no-preload-875594 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-875594            │ jenkins │ v1.37.0 │ 11 Jan 26 07:57 UTC │ 11 Jan 26 07:57 UTC │
	│ start   │ -p no-preload-875594 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                       │ no-preload-875594            │ jenkins │ v1.37.0 │ 11 Jan 26 07:57 UTC │ 11 Jan 26 07:58 UTC │
	│ addons  │ enable dashboard -p embed-certs-285966 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-285966           │ jenkins │ v1.37.0 │ 11 Jan 26 07:57 UTC │ 11 Jan 26 07:57 UTC │
	│ start   │ -p embed-certs-285966 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                        │ embed-certs-285966           │ jenkins │ v1.37.0 │ 11 Jan 26 07:57 UTC │ 11 Jan 26 07:58 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-585688 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-585688 │ jenkins │ v1.37.0 │ 11 Jan 26 07:57 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-585688 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-585688 │ jenkins │ v1.37.0 │ 11 Jan 26 07:57 UTC │ 11 Jan 26 07:58 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-585688 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-585688 │ jenkins │ v1.37.0 │ 11 Jan 26 07:58 UTC │ 11 Jan 26 07:58 UTC │
	│ start   │ -p default-k8s-diff-port-585688 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-585688 │ jenkins │ v1.37.0 │ 11 Jan 26 07:58 UTC │                     │
	│ image   │ old-k8s-version-086314 image list --format=json                                                                                                                                                                                               │ old-k8s-version-086314       │ jenkins │ v1.37.0 │ 11 Jan 26 07:58 UTC │ 11 Jan 26 07:58 UTC │
	│ pause   │ -p old-k8s-version-086314 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-086314       │ jenkins │ v1.37.0 │ 11 Jan 26 07:58 UTC │                     │
	│ delete  │ -p old-k8s-version-086314                                                                                                                                                                                                                     │ old-k8s-version-086314       │ jenkins │ v1.37.0 │ 11 Jan 26 07:58 UTC │ 11 Jan 26 07:58 UTC │
	│ image   │ no-preload-875594 image list --format=json                                                                                                                                                                                                    │ no-preload-875594            │ jenkins │ v1.37.0 │ 11 Jan 26 07:58 UTC │ 11 Jan 26 07:58 UTC │
	│ pause   │ -p no-preload-875594 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-875594            │ jenkins │ v1.37.0 │ 11 Jan 26 07:58 UTC │                     │
	│ image   │ embed-certs-285966 image list --format=json                                                                                                                                                                                                   │ embed-certs-285966           │ jenkins │ v1.37.0 │ 11 Jan 26 07:58 UTC │ 11 Jan 26 07:58 UTC │
	│ pause   │ -p embed-certs-285966 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-285966           │ jenkins │ v1.37.0 │ 11 Jan 26 07:58 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2026/01/11 07:58:03
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0111 07:58:03.750293  321160 out.go:360] Setting OutFile to fd 1 ...
	I0111 07:58:03.750380  321160 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 07:58:03.750385  321160 out.go:374] Setting ErrFile to fd 2...
	I0111 07:58:03.750389  321160 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 07:58:03.750603  321160 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22402-4689/.minikube/bin
	I0111 07:58:03.751045  321160 out.go:368] Setting JSON to false
	I0111 07:58:03.752302  321160 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2432,"bootTime":1768115852,"procs":362,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0111 07:58:03.752356  321160 start.go:143] virtualization: kvm guest
	I0111 07:58:03.754333  321160 out.go:179] * [default-k8s-diff-port-585688] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0111 07:58:03.755670  321160 out.go:179]   - MINIKUBE_LOCATION=22402
	I0111 07:58:03.755682  321160 notify.go:221] Checking for updates...
	I0111 07:58:03.760338  321160 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0111 07:58:03.761494  321160 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22402-4689/kubeconfig
	I0111 07:58:03.762557  321160 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22402-4689/.minikube
	I0111 07:58:03.763473  321160 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0111 07:58:03.764542  321160 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0111 07:58:03.765996  321160 config.go:182] Loaded profile config "default-k8s-diff-port-585688": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 07:58:03.766475  321160 driver.go:422] Setting default libvirt URI to qemu:///system
	I0111 07:58:03.790517  321160 docker.go:124] docker version: linux-29.1.4:Docker Engine - Community
	I0111 07:58:03.790609  321160 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0111 07:58:03.850389  321160 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:74 SystemTime:2026-01-11 07:58:03.839194751 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0111 07:58:03.850495  321160 docker.go:319] overlay module found
	I0111 07:58:03.852223  321160 out.go:179] * Using the docker driver based on existing profile
	I0111 07:58:03.853346  321160 start.go:309] selected driver: docker
	I0111 07:58:03.853363  321160 start.go:928] validating driver "docker" against &{Name:default-k8s-diff-port-585688 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-585688 Namespace:default APISe
rverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] M
ountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0111 07:58:03.853470  321160 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0111 07:58:03.854110  321160 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0111 07:58:03.909243  321160 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:74 SystemTime:2026-01-11 07:58:03.898847213 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0111 07:58:03.909498  321160 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0111 07:58:03.909520  321160 cni.go:84] Creating CNI manager for ""
	I0111 07:58:03.909574  321160 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0111 07:58:03.909605  321160 start.go:353] cluster config:
	{Name:default-k8s-diff-port-585688 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-585688 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0111 07:58:03.912090  321160 out.go:179] * Starting "default-k8s-diff-port-585688" primary control-plane node in "default-k8s-diff-port-585688" cluster
	I0111 07:58:03.913198  321160 cache.go:134] Beginning downloading kic base image for docker with crio
	I0111 07:58:03.914457  321160 out.go:179] * Pulling base image v0.0.48-1768032998-22402 ...
	I0111 07:58:03.915521  321160 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0111 07:58:03.915557  321160 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22402-4689/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4
	I0111 07:58:03.915566  321160 cache.go:65] Caching tarball of preloaded images
	I0111 07:58:03.915650  321160 preload.go:251] Found /home/jenkins/minikube-integration/22402-4689/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0111 07:58:03.915647  321160 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 in local docker daemon
	I0111 07:58:03.915662  321160 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I0111 07:58:03.915923  321160 profile.go:143] Saving config to /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/default-k8s-diff-port-585688/config.json ...
	I0111 07:58:03.937090  321160 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 in local docker daemon, skipping pull
	I0111 07:58:03.937113  321160 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 exists in daemon, skipping load
	I0111 07:58:03.937135  321160 cache.go:243] Successfully downloaded all kic artifacts
	I0111 07:58:03.937167  321160 start.go:360] acquireMachinesLock for default-k8s-diff-port-585688: {Name:mk2e6a9de762d1bbe8860b89830bc1ce9534f022 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0111 07:58:03.937235  321160 start.go:364] duration metric: took 38.527µs to acquireMachinesLock for "default-k8s-diff-port-585688"
	I0111 07:58:03.937257  321160 start.go:96] Skipping create...Using existing machine configuration
	I0111 07:58:03.937263  321160 fix.go:54] fixHost starting: 
	I0111 07:58:03.937492  321160 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-585688 --format={{.State.Status}}
	I0111 07:58:03.956257  321160 fix.go:112] recreateIfNeeded on default-k8s-diff-port-585688: state=Stopped err=<nil>
	W0111 07:58:03.956296  321160 fix.go:138] unexpected machine state, will restart: <nil>
	I0111 07:58:03.958165  321160 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-585688" ...
	I0111 07:58:03.958234  321160 cli_runner.go:164] Run: docker start default-k8s-diff-port-585688
	I0111 07:58:04.218023  321160 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-585688 --format={{.State.Status}}
	I0111 07:58:04.237128  321160 kic.go:430] container "default-k8s-diff-port-585688" state is running.
	I0111 07:58:04.237486  321160 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-585688
	I0111 07:58:04.256566  321160 profile.go:143] Saving config to /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/default-k8s-diff-port-585688/config.json ...
	I0111 07:58:04.256863  321160 machine.go:94] provisionDockerMachine start ...
	I0111 07:58:04.256948  321160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-585688
	I0111 07:58:04.276200  321160 main.go:144] libmachine: Using SSH client type: native
	I0111 07:58:04.276485  321160 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x9087e0] 0x90b480 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I0111 07:58:04.276505  321160 main.go:144] libmachine: About to run SSH command:
	hostname
	I0111 07:58:04.277203  321160 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:43966->127.0.0.1:33123: read: connection reset by peer
	I0111 07:58:07.423204  321160 main.go:144] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-585688
	
	I0111 07:58:07.423233  321160 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-585688"
	I0111 07:58:07.423302  321160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-585688
	I0111 07:58:07.443350  321160 main.go:144] libmachine: Using SSH client type: native
	I0111 07:58:07.443550  321160 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x9087e0] 0x90b480 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I0111 07:58:07.443562  321160 main.go:144] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-585688 && echo "default-k8s-diff-port-585688" | sudo tee /etc/hostname
	I0111 07:58:07.600640  321160 main.go:144] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-585688
	
	I0111 07:58:07.600736  321160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-585688
	I0111 07:58:07.621646  321160 main.go:144] libmachine: Using SSH client type: native
	I0111 07:58:07.621975  321160 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x9087e0] 0x90b480 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I0111 07:58:07.622014  321160 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-585688' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-585688/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-585688' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0111 07:58:07.770516  321160 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I0111 07:58:07.770544  321160 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22402-4689/.minikube CaCertPath:/home/jenkins/minikube-integration/22402-4689/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22402-4689/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22402-4689/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22402-4689/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22402-4689/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22402-4689/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22402-4689/.minikube}
	I0111 07:58:07.770594  321160 ubuntu.go:190] setting up certificates
	I0111 07:58:07.770613  321160 provision.go:84] configureAuth start
	I0111 07:58:07.770680  321160 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-585688
	I0111 07:58:07.788321  321160 provision.go:143] copyHostCerts
	I0111 07:58:07.788406  321160 exec_runner.go:144] found /home/jenkins/minikube-integration/22402-4689/.minikube/ca.pem, removing ...
	I0111 07:58:07.788419  321160 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22402-4689/.minikube/ca.pem
	I0111 07:58:07.788492  321160 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22402-4689/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22402-4689/.minikube/ca.pem (1082 bytes)
	I0111 07:58:07.788602  321160 exec_runner.go:144] found /home/jenkins/minikube-integration/22402-4689/.minikube/cert.pem, removing ...
	I0111 07:58:07.788611  321160 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22402-4689/.minikube/cert.pem
	I0111 07:58:07.788643  321160 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22402-4689/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22402-4689/.minikube/cert.pem (1123 bytes)
	I0111 07:58:07.788720  321160 exec_runner.go:144] found /home/jenkins/minikube-integration/22402-4689/.minikube/key.pem, removing ...
	I0111 07:58:07.788727  321160 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22402-4689/.minikube/key.pem
	I0111 07:58:07.788752  321160 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22402-4689/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22402-4689/.minikube/key.pem (1675 bytes)
	I0111 07:58:07.788836  321160 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22402-4689/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22402-4689/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22402-4689/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-585688 san=[127.0.0.1 192.168.94.2 default-k8s-diff-port-585688 localhost minikube]
	I0111 07:58:07.847825  321160 provision.go:177] copyRemoteCerts
	I0111 07:58:07.847881  321160 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0111 07:58:07.847920  321160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-585688
	I0111 07:58:07.866299  321160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22402-4689/.minikube/machines/default-k8s-diff-port-585688/id_rsa Username:docker}
	I0111 07:58:07.968665  321160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-4689/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0111 07:58:07.986297  321160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-4689/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0111 07:58:08.003675  321160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-4689/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0111 07:58:08.020217  321160 provision.go:87] duration metric: took 249.589615ms to configureAuth
	I0111 07:58:08.020247  321160 ubuntu.go:206] setting minikube options for container-runtime
	I0111 07:58:08.020421  321160 config.go:182] Loaded profile config "default-k8s-diff-port-585688": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 07:58:08.020520  321160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-585688
	I0111 07:58:08.038641  321160 main.go:144] libmachine: Using SSH client type: native
	I0111 07:58:08.038886  321160 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x9087e0] 0x90b480 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I0111 07:58:08.038906  321160 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0111 07:58:08.392034  321160 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0111 07:58:08.392061  321160 machine.go:97] duration metric: took 4.135180228s to provisionDockerMachine
	I0111 07:58:08.392076  321160 start.go:293] postStartSetup for "default-k8s-diff-port-585688" (driver="docker")
	I0111 07:58:08.392091  321160 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0111 07:58:08.392193  321160 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0111 07:58:08.392248  321160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-585688
	I0111 07:58:08.413393  321160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22402-4689/.minikube/machines/default-k8s-diff-port-585688/id_rsa Username:docker}
	I0111 07:58:08.516961  321160 ssh_runner.go:195] Run: cat /etc/os-release
	I0111 07:58:08.520383  321160 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0111 07:58:08.520406  321160 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I0111 07:58:08.520417  321160 filesync.go:126] Scanning /home/jenkins/minikube-integration/22402-4689/.minikube/addons for local assets ...
	I0111 07:58:08.520476  321160 filesync.go:126] Scanning /home/jenkins/minikube-integration/22402-4689/.minikube/files for local assets ...
	I0111 07:58:08.520571  321160 filesync.go:149] local asset: /home/jenkins/minikube-integration/22402-4689/.minikube/files/etc/ssl/certs/82382.pem -> 82382.pem in /etc/ssl/certs
	I0111 07:58:08.520693  321160 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0111 07:58:08.528116  321160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-4689/.minikube/files/etc/ssl/certs/82382.pem --> /etc/ssl/certs/82382.pem (1708 bytes)
	I0111 07:58:08.545008  321160 start.go:296] duration metric: took 152.917379ms for postStartSetup
	I0111 07:58:08.545089  321160 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0111 07:58:08.545135  321160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-585688
	I0111 07:58:08.565305  321160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22402-4689/.minikube/machines/default-k8s-diff-port-585688/id_rsa Username:docker}
	I0111 07:58:08.663695  321160 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0111 07:58:08.668649  321160 fix.go:56] duration metric: took 4.731380965s for fixHost
	I0111 07:58:08.668678  321160 start.go:83] releasing machines lock for "default-k8s-diff-port-585688", held for 4.731430414s
	I0111 07:58:08.668736  321160 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-585688
	I0111 07:58:08.686470  321160 ssh_runner.go:195] Run: cat /version.json
	I0111 07:58:08.686520  321160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-585688
	I0111 07:58:08.686548  321160 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0111 07:58:08.686604  321160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-585688
	I0111 07:58:08.705659  321160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22402-4689/.minikube/machines/default-k8s-diff-port-585688/id_rsa Username:docker}
	I0111 07:58:08.705850  321160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22402-4689/.minikube/machines/default-k8s-diff-port-585688/id_rsa Username:docker}
	I0111 07:58:08.802583  321160 ssh_runner.go:195] Run: systemctl --version
	I0111 07:58:08.857955  321160 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0111 07:58:08.894328  321160 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0111 07:58:08.899416  321160 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0111 07:58:08.899467  321160 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0111 07:58:08.907527  321160 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0111 07:58:08.907546  321160 start.go:496] detecting cgroup driver to use...
	I0111 07:58:08.907572  321160 detect.go:178] detected "systemd" cgroup driver on host os
	I0111 07:58:08.907611  321160 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0111 07:58:08.921690  321160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0111 07:58:08.934582  321160 docker.go:218] disabling cri-docker service (if available) ...
	I0111 07:58:08.934627  321160 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0111 07:58:08.949397  321160 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0111 07:58:08.961533  321160 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0111 07:58:09.051133  321160 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0111 07:58:09.145618  321160 docker.go:234] disabling docker service ...
	I0111 07:58:09.145678  321160 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0111 07:58:09.162569  321160 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0111 07:58:09.176818  321160 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0111 07:58:09.269156  321160 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0111 07:58:09.366007  321160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0111 07:58:09.378870  321160 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0111 07:58:09.393343  321160 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0111 07:58:09.393400  321160 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 07:58:09.402014  321160 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I0111 07:58:09.402068  321160 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 07:58:09.411013  321160 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 07:58:09.419523  321160 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 07:58:09.428136  321160 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0111 07:58:09.436223  321160 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 07:58:09.445180  321160 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 07:58:09.453107  321160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 07:58:09.461768  321160 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0111 07:58:09.469426  321160 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0111 07:58:09.476697  321160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0111 07:58:09.570322  321160 ssh_runner.go:195] Run: sudo systemctl restart crio
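For reference, the sequence of sed commands above amounts to a CRI-O drop-in along the following lines before the restart. This is a sketch reconstructed from the commands themselves; the [crio.image]/[crio.runtime] table headers and any other keys already present in /etc/crio/crio.conf.d/02-crio.conf on the kic base image are assumed rather than shown in this log:

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"

	[crio.runtime]
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]

The daemon-reload and crio restart immediately above are what make these edits take effect; kubelet is only started further down.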
	I0111 07:58:09.722296  321160 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I0111 07:58:09.722379  321160 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0111 07:58:09.727410  321160 start.go:574] Will wait 60s for crictl version
	I0111 07:58:09.727482  321160 ssh_runner.go:195] Run: which crictl
	I0111 07:58:09.731162  321160 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I0111 07:58:09.761480  321160 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.35.0
	RuntimeApiVersion:  v1
	I0111 07:58:09.761578  321160 ssh_runner.go:195] Run: crio --version
	I0111 07:58:09.790957  321160 ssh_runner.go:195] Run: crio --version
	I0111 07:58:09.821090  321160 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.35.0 ...
	I0111 07:58:09.822313  321160 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-585688 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0111 07:58:09.840952  321160 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I0111 07:58:09.845556  321160 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0111 07:58:09.856115  321160 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-585688 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-585688 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I0111 07:58:09.856226  321160 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0111 07:58:09.856272  321160 ssh_runner.go:195] Run: sudo crictl images --output json
	I0111 07:58:09.894086  321160 crio.go:561] all images are preloaded for cri-o runtime.
	I0111 07:58:09.894116  321160 crio.go:433] Images already preloaded, skipping extraction
	I0111 07:58:09.894171  321160 ssh_runner.go:195] Run: sudo crictl images --output json
	I0111 07:58:09.927564  321160 crio.go:561] all images are preloaded for cri-o runtime.
	I0111 07:58:09.927585  321160 cache_images.go:86] Images are preloaded, skipping loading
	I0111 07:58:09.927595  321160 kubeadm.go:935] updating node { 192.168.94.2 8444 v1.35.0 crio true true} ...
	I0111 07:58:09.927696  321160 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-585688 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-585688 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
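The kubelet unit drop-in above clears the stock ExecStart and replaces it with the profile-specific flags (hostname override, node IP, kubeconfig paths, and --config=/var/lib/kubelet/config.yaml); the 378-byte scp a few lines below writes it to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. One way to confirm the merged unit on the node, not something this test does, would be:

	out/minikube-linux-amd64 -p default-k8s-diff-port-585688 ssh -- systemctl cat kubelet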
	I0111 07:58:09.927766  321160 ssh_runner.go:195] Run: crio config
	I0111 07:58:09.979146  321160 cni.go:84] Creating CNI manager for ""
	I0111 07:58:09.979176  321160 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0111 07:58:09.979195  321160 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I0111 07:58:09.979224  321160 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8444 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-585688 NodeName:default-k8s-diff-port-585688 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.c
rt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0111 07:58:09.979417  321160 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-585688"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0111 07:58:09.979500  321160 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I0111 07:58:09.988051  321160 binaries.go:51] Found k8s binaries, skipping transfer
	I0111 07:58:09.988123  321160 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0111 07:58:09.996526  321160 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I0111 07:58:10.012228  321160 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0111 07:58:10.025768  321160 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
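Taken together, the generated kubeadm config above stacks four API documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) and is staged as /var/tmp/minikube/kubeadm.yaml.new; further down, minikube diffs it against the existing /var/tmp/minikube/kubeadm.yaml to decide whether the control plane actually needs reconfiguring. A manual sanity check of such a file, not part of this run and assuming kubeadm sits under /var/lib/minikube/binaries/v1.35.0 alongside the kubelet and kubectl binaries that do appear in this log, would be a dry run against the staged path:

	sudo /var/lib/minikube/binaries/v1.35.0/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run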
	I0111 07:58:10.039959  321160 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I0111 07:58:10.044677  321160 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0111 07:58:10.055135  321160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0111 07:58:10.148780  321160 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0111 07:58:10.176295  321160 certs.go:69] Setting up /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/default-k8s-diff-port-585688 for IP: 192.168.94.2
	I0111 07:58:10.176324  321160 certs.go:195] generating shared ca certs ...
	I0111 07:58:10.176345  321160 certs.go:227] acquiring lock for ca certs: {Name:mk9f7a57f61b85f2c0f52e44b6a926769e368f21 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 07:58:10.176550  321160 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22402-4689/.minikube/ca.key
	I0111 07:58:10.176628  321160 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22402-4689/.minikube/proxy-client-ca.key
	I0111 07:58:10.176645  321160 certs.go:257] generating profile certs ...
	I0111 07:58:10.176777  321160 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/default-k8s-diff-port-585688/client.key
	I0111 07:58:10.176903  321160 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/default-k8s-diff-port-585688/apiserver.key.9fad6aad
	I0111 07:58:10.176972  321160 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/default-k8s-diff-port-585688/proxy-client.key
	I0111 07:58:10.177118  321160 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-4689/.minikube/certs/8238.pem (1338 bytes)
	W0111 07:58:10.177163  321160 certs.go:480] ignoring /home/jenkins/minikube-integration/22402-4689/.minikube/certs/8238_empty.pem, impossibly tiny 0 bytes
	I0111 07:58:10.177180  321160 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-4689/.minikube/certs/ca-key.pem (1679 bytes)
	I0111 07:58:10.177218  321160 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-4689/.minikube/certs/ca.pem (1082 bytes)
	I0111 07:58:10.177251  321160 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-4689/.minikube/certs/cert.pem (1123 bytes)
	I0111 07:58:10.177303  321160 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-4689/.minikube/certs/key.pem (1675 bytes)
	I0111 07:58:10.177362  321160 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-4689/.minikube/files/etc/ssl/certs/82382.pem (1708 bytes)
	I0111 07:58:10.178133  321160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-4689/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0111 07:58:10.200055  321160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-4689/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0111 07:58:10.220310  321160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-4689/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0111 07:58:10.241596  321160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-4689/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0111 07:58:10.264851  321160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/default-k8s-diff-port-585688/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0111 07:58:10.286183  321160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/default-k8s-diff-port-585688/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0111 07:58:10.305436  321160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/default-k8s-diff-port-585688/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0111 07:58:10.324464  321160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/default-k8s-diff-port-585688/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0111 07:58:10.341915  321160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-4689/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0111 07:58:10.359530  321160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-4689/.minikube/certs/8238.pem --> /usr/share/ca-certificates/8238.pem (1338 bytes)
	I0111 07:58:10.378385  321160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-4689/.minikube/files/etc/ssl/certs/82382.pem --> /usr/share/ca-certificates/82382.pem (1708 bytes)
	I0111 07:58:10.398915  321160 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I0111 07:58:10.412463  321160 ssh_runner.go:195] Run: openssl version
	I0111 07:58:10.418410  321160 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I0111 07:58:10.425756  321160 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I0111 07:58:10.433151  321160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0111 07:58:10.436999  321160 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 11 07:23 /usr/share/ca-certificates/minikubeCA.pem
	I0111 07:58:10.437049  321160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0111 07:58:10.475544  321160 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I0111 07:58:10.482877  321160 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8238.pem
	I0111 07:58:10.490480  321160 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8238.pem /etc/ssl/certs/8238.pem
	I0111 07:58:10.498796  321160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8238.pem
	I0111 07:58:10.503030  321160 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 11 07:27 /usr/share/ca-certificates/8238.pem
	I0111 07:58:10.503082  321160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8238.pem
	I0111 07:58:10.540672  321160 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I0111 07:58:10.548062  321160 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/82382.pem
	I0111 07:58:10.555795  321160 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/82382.pem /etc/ssl/certs/82382.pem
	I0111 07:58:10.564684  321160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/82382.pem
	I0111 07:58:10.568782  321160 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 11 07:27 /usr/share/ca-certificates/82382.pem
	I0111 07:58:10.568849  321160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/82382.pem
	I0111 07:58:10.606242  321160 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I0111 07:58:10.614724  321160 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0111 07:58:10.618608  321160 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0111 07:58:10.657921  321160 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0111 07:58:10.701121  321160 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0111 07:58:10.752169  321160 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0111 07:58:10.799346  321160 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0111 07:58:10.859609  321160 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0111 07:58:10.919705  321160 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-585688 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-585688 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0111 07:58:10.919881  321160 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0111 07:58:10.919982  321160 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0111 07:58:10.955395  321160 cri.go:96] found id: "8f22f3b6659f979a42e531e71e66b83f2e5a22b700a721070ae92fe483579939"
	I0111 07:58:10.955470  321160 cri.go:96] found id: "63e746d642de550ccfb3f5c5bee17e53f1ac0ac24283ebc2ad8f7153b15dec23"
	I0111 07:58:10.955484  321160 cri.go:96] found id: "4f9f1c19c75f7ac1ecbae6c98fc1987feb975fc126f5bcd1565546bc42504918"
	I0111 07:58:10.955504  321160 cri.go:96] found id: "a5a86114d8fe5ac425c7c94815a54aaa43fc1dc726e4e3b0be78ef6291f0b2df"
	I0111 07:58:10.955515  321160 cri.go:96] found id: ""
	I0111 07:58:10.955570  321160 ssh_runner.go:195] Run: sudo runc list -f json
	W0111 07:58:10.974501  321160 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-11T07:58:10Z" level=error msg="open /run/runc: no such file or directory"
	I0111 07:58:10.974655  321160 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0111 07:58:10.984862  321160 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I0111 07:58:10.984883  321160 kubeadm.go:598] restartPrimaryControlPlane start ...
	I0111 07:58:10.984947  321160 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0111 07:58:10.993617  321160 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0111 07:58:10.994926  321160 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-585688" does not appear in /home/jenkins/minikube-integration/22402-4689/kubeconfig
	I0111 07:58:10.995844  321160 kubeconfig.go:62] /home/jenkins/minikube-integration/22402-4689/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-585688" cluster setting kubeconfig missing "default-k8s-diff-port-585688" context setting]
	I0111 07:58:10.997160  321160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-4689/kubeconfig: {Name:mkf02a7b755a7b9c0efbafb355649086e2a25e84 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 07:58:10.999520  321160 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0111 07:58:11.008328  321160 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.94.2
	I0111 07:58:11.008360  321160 kubeadm.go:602] duration metric: took 23.470875ms to restartPrimaryControlPlane
	I0111 07:58:11.008370  321160 kubeadm.go:403] duration metric: took 88.672136ms to StartCluster
	I0111 07:58:11.008390  321160 settings.go:142] acquiring lock: {Name:mk091111a06dcfa678353e028948ce400417c137 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 07:58:11.008453  321160 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22402-4689/kubeconfig
	I0111 07:58:11.010935  321160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-4689/kubeconfig: {Name:mkf02a7b755a7b9c0efbafb355649086e2a25e84 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 07:58:11.011207  321160 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0111 07:58:11.011310  321160 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0111 07:58:11.011405  321160 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-585688"
	I0111 07:58:11.011425  321160 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-585688"
	W0111 07:58:11.011437  321160 addons.go:248] addon storage-provisioner should already be in state true
	I0111 07:58:11.011443  321160 config.go:182] Loaded profile config "default-k8s-diff-port-585688": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 07:58:11.011468  321160 host.go:66] Checking if "default-k8s-diff-port-585688" exists ...
	I0111 07:58:11.011466  321160 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-585688"
	I0111 07:58:11.011494  321160 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-585688"
	I0111 07:58:11.011492  321160 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-585688"
	I0111 07:58:11.011525  321160 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-585688"
	W0111 07:58:11.011506  321160 addons.go:248] addon dashboard should already be in state true
	I0111 07:58:11.011687  321160 host.go:66] Checking if "default-k8s-diff-port-585688" exists ...
	I0111 07:58:11.011860  321160 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-585688 --format={{.State.Status}}
	I0111 07:58:11.011974  321160 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-585688 --format={{.State.Status}}
	I0111 07:58:11.012180  321160 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-585688 --format={{.State.Status}}
	I0111 07:58:11.014031  321160 out.go:179] * Verifying Kubernetes components...
	I0111 07:58:11.015253  321160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0111 07:58:11.038372  321160 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0111 07:58:11.039262  321160 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-585688"
	W0111 07:58:11.039284  321160 addons.go:248] addon default-storageclass should already be in state true
	I0111 07:58:11.039312  321160 host.go:66] Checking if "default-k8s-diff-port-585688" exists ...
	I0111 07:58:11.039655  321160 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0111 07:58:11.039708  321160 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0111 07:58:11.039725  321160 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0111 07:58:11.039750  321160 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-585688 --format={{.State.Status}}
	I0111 07:58:11.039774  321160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-585688
	I0111 07:58:11.041962  321160 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I0111 07:58:11.042970  321160 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0111 07:58:11.042992  321160 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0111 07:58:11.043048  321160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-585688
	I0111 07:58:11.069294  321160 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I0111 07:58:11.069340  321160 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0111 07:58:11.069402  321160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-585688
	I0111 07:58:11.080336  321160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22402-4689/.minikube/machines/default-k8s-diff-port-585688/id_rsa Username:docker}
	I0111 07:58:11.081959  321160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22402-4689/.minikube/machines/default-k8s-diff-port-585688/id_rsa Username:docker}
	I0111 07:58:11.105075  321160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22402-4689/.minikube/machines/default-k8s-diff-port-585688/id_rsa Username:docker}
	I0111 07:58:11.194615  321160 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0111 07:58:11.213444  321160 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0111 07:58:11.213960  321160 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-585688" to be "Ready" ...
	I0111 07:58:11.218996  321160 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0111 07:58:11.219068  321160 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0111 07:58:11.231143  321160 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0111 07:58:11.236556  321160 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0111 07:58:11.236637  321160 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0111 07:58:11.255192  321160 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0111 07:58:11.255218  321160 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0111 07:58:11.274461  321160 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0111 07:58:11.274487  321160 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0111 07:58:11.294158  321160 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0111 07:58:11.294186  321160 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0111 07:58:11.311976  321160 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0111 07:58:11.312005  321160 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0111 07:58:11.327541  321160 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0111 07:58:11.327570  321160 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0111 07:58:11.340600  321160 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0111 07:58:11.340626  321160 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0111 07:58:11.355838  321160 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0111 07:58:11.355864  321160 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0111 07:58:11.369500  321160 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0111 07:58:12.581033  321160 node_ready.go:49] node "default-k8s-diff-port-585688" is "Ready"
	I0111 07:58:12.581075  321160 node_ready.go:38] duration metric: took 1.367076206s for node "default-k8s-diff-port-585688" to be "Ready" ...
	I0111 07:58:12.581099  321160 api_server.go:52] waiting for apiserver process to appear ...
	I0111 07:58:12.581161  321160 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0111 07:58:13.188530  321160 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.975045414s)
	I0111 07:58:13.188653  321160 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.957477399s)
	I0111 07:58:13.188768  321160 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.819233916s)
	I0111 07:58:13.188843  321160 api_server.go:72] duration metric: took 2.177603881s to wait for apiserver process to appear ...
	I0111 07:58:13.188864  321160 api_server.go:88] waiting for apiserver healthz status ...
	I0111 07:58:13.188886  321160 api_server.go:299] Checking apiserver healthz at https://192.168.94.2:8444/healthz ...
	I0111 07:58:13.190609  321160 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-585688 addons enable metrics-server
	
	I0111 07:58:13.193744  321160 api_server.go:325] https://192.168.94.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0111 07:58:13.193770  321160 api_server.go:103] status: https://192.168.94.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0111 07:58:13.196513  321160 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I0111 07:58:13.197653  321160 addons.go:530] duration metric: took 2.186350851s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I0111 07:58:13.689700  321160 api_server.go:299] Checking apiserver healthz at https://192.168.94.2:8444/healthz ...
	I0111 07:58:13.696308  321160 api_server.go:325] https://192.168.94.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0111 07:58:13.696338  321160 api_server.go:103] status: https://192.168.94.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	
	
	==> CRI-O <==
	Jan 11 07:57:43 no-preload-875594 crio[570]: time="2026-01-11T07:57:43.169321907Z" level=info msg="Started container" PID=1763 containerID=5e61dc34967c58857374ba5caf2dcc50449512e3ab4d58f44031a265354250b3 description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-944ph/dashboard-metrics-scraper id=0ece006e-8469-41cf-82c4-f6435128eff9 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f19ec2fc838b20302373dc9424073e7cf3296fd833154a46c4b0e1893316676e
	Jan 11 07:57:43 no-preload-875594 crio[570]: time="2026-01-11T07:57:43.213762927Z" level=info msg="Removing container: 7de5159f3687df4074a89fdcea0b5e3baf17cacaa1b560ed886a26db786b9bca" id=c3407b8b-c219-490f-b41c-87b094a98ff4 name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 11 07:57:43 no-preload-875594 crio[570]: time="2026-01-11T07:57:43.222009812Z" level=info msg="Removed container 7de5159f3687df4074a89fdcea0b5e3baf17cacaa1b560ed886a26db786b9bca: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-944ph/dashboard-metrics-scraper" id=c3407b8b-c219-490f-b41c-87b094a98ff4 name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 11 07:57:56 no-preload-875594 crio[570]: time="2026-01-11T07:57:56.24799573Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=4b091fd0-2e1b-4ff0-83cc-3beac9eb7f8f name=/runtime.v1.ImageService/ImageStatus
	Jan 11 07:57:56 no-preload-875594 crio[570]: time="2026-01-11T07:57:56.24893956Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=eb725990-f981-4e95-95c2-f78259f8cf54 name=/runtime.v1.ImageService/ImageStatus
	Jan 11 07:57:56 no-preload-875594 crio[570]: time="2026-01-11T07:57:56.249988929Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=365330ca-4be5-46dd-bbb6-8935314e1298 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 11 07:57:56 no-preload-875594 crio[570]: time="2026-01-11T07:57:56.25012987Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 11 07:57:56 no-preload-875594 crio[570]: time="2026-01-11T07:57:56.25518514Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 11 07:57:56 no-preload-875594 crio[570]: time="2026-01-11T07:57:56.255346981Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/d854b0e922c36b5a6d569ec5f5f35c156bfbbc70db5a69f84701c23b2752d3d1/merged/etc/passwd: no such file or directory"
	Jan 11 07:57:56 no-preload-875594 crio[570]: time="2026-01-11T07:57:56.255371353Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/d854b0e922c36b5a6d569ec5f5f35c156bfbbc70db5a69f84701c23b2752d3d1/merged/etc/group: no such file or directory"
	Jan 11 07:57:56 no-preload-875594 crio[570]: time="2026-01-11T07:57:56.256292377Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 11 07:57:56 no-preload-875594 crio[570]: time="2026-01-11T07:57:56.280227004Z" level=info msg="Created container 8e14bd67002a296324708f9d3d673b05eb3191d9b342a923b346b535dc8eb241: kube-system/storage-provisioner/storage-provisioner" id=365330ca-4be5-46dd-bbb6-8935314e1298 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 11 07:57:56 no-preload-875594 crio[570]: time="2026-01-11T07:57:56.280848899Z" level=info msg="Starting container: 8e14bd67002a296324708f9d3d673b05eb3191d9b342a923b346b535dc8eb241" id=ffd13d6a-08d5-442a-8e02-466daf34054b name=/runtime.v1.RuntimeService/StartContainer
	Jan 11 07:57:56 no-preload-875594 crio[570]: time="2026-01-11T07:57:56.282640881Z" level=info msg="Started container" PID=1777 containerID=8e14bd67002a296324708f9d3d673b05eb3191d9b342a923b346b535dc8eb241 description=kube-system/storage-provisioner/storage-provisioner id=ffd13d6a-08d5-442a-8e02-466daf34054b name=/runtime.v1.RuntimeService/StartContainer sandboxID=3c2215136938b327b572ac2cd035193ab713997e1cea06e20720a622c1e75525
	Jan 11 07:58:05 no-preload-875594 crio[570]: time="2026-01-11T07:58:05.119877462Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=a1dc5002-5b55-4a52-862b-b5654c477260 name=/runtime.v1.ImageService/ImageStatus
	Jan 11 07:58:05 no-preload-875594 crio[570]: time="2026-01-11T07:58:05.120838134Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=1bd909b5-e2a9-4309-8f1c-5542ce912ad4 name=/runtime.v1.ImageService/ImageStatus
	Jan 11 07:58:05 no-preload-875594 crio[570]: time="2026-01-11T07:58:05.121902013Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-944ph/dashboard-metrics-scraper" id=b56c7a0e-4adb-4bd7-9288-56fc199f96ed name=/runtime.v1.RuntimeService/CreateContainer
	Jan 11 07:58:05 no-preload-875594 crio[570]: time="2026-01-11T07:58:05.122036939Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 11 07:58:05 no-preload-875594 crio[570]: time="2026-01-11T07:58:05.128295726Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 11 07:58:05 no-preload-875594 crio[570]: time="2026-01-11T07:58:05.128744185Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 11 07:58:05 no-preload-875594 crio[570]: time="2026-01-11T07:58:05.15225624Z" level=info msg="Created container a8c9b7ad1d71fbba8c61c0c3d563b75c6c90a4509aeb87ca901fe8c8c58e528d: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-944ph/dashboard-metrics-scraper" id=b56c7a0e-4adb-4bd7-9288-56fc199f96ed name=/runtime.v1.RuntimeService/CreateContainer
	Jan 11 07:58:05 no-preload-875594 crio[570]: time="2026-01-11T07:58:05.152840636Z" level=info msg="Starting container: a8c9b7ad1d71fbba8c61c0c3d563b75c6c90a4509aeb87ca901fe8c8c58e528d" id=0b669824-be6a-4255-b729-8587a3a8732c name=/runtime.v1.RuntimeService/StartContainer
	Jan 11 07:58:05 no-preload-875594 crio[570]: time="2026-01-11T07:58:05.154526275Z" level=info msg="Started container" PID=1816 containerID=a8c9b7ad1d71fbba8c61c0c3d563b75c6c90a4509aeb87ca901fe8c8c58e528d description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-944ph/dashboard-metrics-scraper id=0b669824-be6a-4255-b729-8587a3a8732c name=/runtime.v1.RuntimeService/StartContainer sandboxID=f19ec2fc838b20302373dc9424073e7cf3296fd833154a46c4b0e1893316676e
	Jan 11 07:58:05 no-preload-875594 crio[570]: time="2026-01-11T07:58:05.274852676Z" level=info msg="Removing container: 5e61dc34967c58857374ba5caf2dcc50449512e3ab4d58f44031a265354250b3" id=cddbbc57-ce3d-43fe-a427-b00d4723b0a8 name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 11 07:58:05 no-preload-875594 crio[570]: time="2026-01-11T07:58:05.284409068Z" level=info msg="Removed container 5e61dc34967c58857374ba5caf2dcc50449512e3ab4d58f44031a265354250b3: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-944ph/dashboard-metrics-scraper" id=cddbbc57-ce3d-43fe-a427-b00d4723b0a8 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	a8c9b7ad1d71f       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           12 seconds ago      Exited              dashboard-metrics-scraper   3                   f19ec2fc838b2       dashboard-metrics-scraper-867fb5f87b-944ph   kubernetes-dashboard
	8e14bd67002a2       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           21 seconds ago      Running             storage-provisioner         1                   3c2215136938b       storage-provisioner                          kube-system
	c8f540e41b187       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   41 seconds ago      Running             kubernetes-dashboard        0                   d4a027b93023d       kubernetes-dashboard-b84665fb8-rwbwb         kubernetes-dashboard
	1f5652c6ae96c       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           51 seconds ago      Running             busybox                     1                   ddba5528c98b5       busybox                                      default
	3af5364625761       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                           51 seconds ago      Running             coredns                     0                   44e5c4674cfe5       coredns-7d764666f9-gw9b6                     kube-system
	c3df2fd79f570       32652ff1bbe6b6df16a3dc621fcad4e6e185a2852bffc5847aab79e36c54bfb8                                           51 seconds ago      Running             kube-proxy                  0                   f4552eea3b3ec       kube-proxy-49b4x                             kube-system
	a1c19f090e2e3       4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251                                           51 seconds ago      Running             kindnet-cni                 0                   b9a8c685f1859       kindnet-p7w57                                kube-system
	a15969534d074       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           51 seconds ago      Exited              storage-provisioner         0                   3c2215136938b       storage-provisioner                          kube-system
	86dd9362e4e98       0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2                                           54 seconds ago      Running             etcd                        0                   a60701e0c2c26       etcd-no-preload-875594                       kube-system
	04186244a4d8a       5c6acd67e9cd10eb60a246cd233db251ef62ea97e6572f897e873f0cb648f499                                           54 seconds ago      Running             kube-apiserver              0                   f9a4f746d9d2a       kube-apiserver-no-preload-875594             kube-system
	a56589dc7dd4d       2c9a4b058bd7e6ec479c38f9e8a1dac2f5ee5b0a3ebda6dfac968e8720229508                                           54 seconds ago      Running             kube-controller-manager     0                   d2be7c153d3e5       kube-controller-manager-no-preload-875594    kube-system
	8198831067c65       550794e3b12ac21ec7fd940bdfb45f7c8dae3c52acd8c70e580b882de20c3dcc                                           54 seconds ago      Running             kube-scheduler              0                   937507af235b6       kube-scheduler-no-preload-875594             kube-system
	
	
	==> coredns [3af5364625761c8a158b701e137ef416af7b328b6f090ab3089466d0fc65554d] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] 127.0.0.1:36671 - 33943 "HINFO IN 429470345494285788.1098832509106181362. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.434608777s
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	
	
	==> describe nodes <==
	Name:               no-preload-875594
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-875594
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4473aa4ffaa416872fe849e19c0ce3dabca02c04
	                    minikube.k8s.io/name=no-preload-875594
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2026_01_11T07_56_28_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 11 Jan 2026 07:56:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-875594
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 11 Jan 2026 07:58:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 11 Jan 2026 07:57:55 +0000   Sun, 11 Jan 2026 07:56:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 11 Jan 2026 07:57:55 +0000   Sun, 11 Jan 2026 07:56:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 11 Jan 2026 07:57:55 +0000   Sun, 11 Jan 2026 07:56:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 11 Jan 2026 07:57:55 +0000   Sun, 11 Jan 2026 07:56:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    no-preload-875594
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 19a608e996fda0d6a8d0aefd69620b69
	  System UUID:                47c67b25-2f7f-4c80-9308-fd505f15ded0
	  Boot ID:                    bf5ef4fd-1e5e-4242-b522-48b308ed11f1
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.35.0
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 coredns-7d764666f9-gw9b6                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     104s
	  kube-system                 etcd-no-preload-875594                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         110s
	  kube-system                 kindnet-p7w57                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      104s
	  kube-system                 kube-apiserver-no-preload-875594              250m (3%)     0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-controller-manager-no-preload-875594     200m (2%)     0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-proxy-49b4x                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	  kube-system                 kube-scheduler-no-preload-875594              100m (1%)     0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	  kubernetes-dashboard        dashboard-metrics-scraper-867fb5f87b-944ph    0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-rwbwb          0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  106s  node-controller  Node no-preload-875594 event: Registered Node no-preload-875594 in Controller
	  Normal  RegisteredNode  50s   node-controller  Node no-preload-875594 event: Registered Node no-preload-875594 in Controller
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff d2 27 60 f0 e8 33 08 06
	[  +7.744805] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ae ab 39 92 a9 51 08 06
	[  +3.804668] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff f6 cc 68 e4 77 08 08 06
	[  +0.000399] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff d2 27 60 f0 e8 33 08 06
	[ +13.685163] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff ce 63 58 3f ae 06 08 06
	[  +0.000359] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ae ab 39 92 a9 51 08 06
	[ +15.294069] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 62 fd 79 5e 2e 8a 08 06
	[  +0.000335] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 92 a7 42 30 c5 33 08 06
	[  +0.001094] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 76 49 4e 78 31 d8 08 06
	[Jan11 07:56] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 1e 72 0a 69 e2 8c 08 06
	[  +0.129588] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 46 31 8d 74 d3 50 08 06
	[ +23.675508] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 72 b4 a7 bf 59 ba 08 06
	[  +0.000409] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 46 31 8d 74 d3 50 08 06
	
	
	==> etcd [86dd9362e4e980669196db9a6fc9ed3859ae025e5b431399b21fd17fef766db2] <==
	{"level":"info","ts":"2026-01-11T07:57:22.696942Z","caller":"membership/cluster.go:433","msg":"ignore already added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2026-01-11T07:57:22.697002Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","from":"3.6","to":"3.6"}
	{"level":"info","ts":"2026-01-11T07:57:22.697215Z","caller":"etcdserver/server.go:483","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"ea7e25599daad906","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2026-01-11T07:57:22.696552Z","caller":"embed/etcd.go:640","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2026-01-11T07:57:22.697450Z","caller":"embed/etcd.go:292","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2026-01-11T07:57:22.697510Z","caller":"embed/etcd.go:890","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2026-01-11T07:57:22.697990Z","caller":"embed/etcd.go:611","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2026-01-11T07:57:23.686757Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2026-01-11T07:57:23.686845Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2026-01-11T07:57:23.687009Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2026-01-11T07:57:23.687061Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2026-01-11T07:57:23.687081Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2026-01-11T07:57:23.687753Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2026-01-11T07:57:23.687776Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2026-01-11T07:57:23.687818Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2026-01-11T07:57:23.687835Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2026-01-11T07:57:23.689049Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:no-preload-875594 ClientURLs:[https://192.168.76.2:2379]}","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2026-01-11T07:57:23.689079Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-11T07:57:23.689071Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-11T07:57:23.689255Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2026-01-11T07:57:23.689282Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2026-01-11T07:57:23.691400Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-11T07:57:23.691461Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-11T07:57:23.694229Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2026-01-11T07:57:23.694234Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	
	
	==> kernel <==
	 07:58:17 up 40 min,  0 user,  load average: 3.37, 3.44, 2.50
	Linux no-preload-875594 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [a1c19f090e2e328cdfb978691c2690587d4e415bd9b0a43536b9b302bf85ac31] <==
	I0111 07:57:25.752935       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0111 07:57:25.753331       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I0111 07:57:25.753574       1 main.go:148] setting mtu 1500 for CNI 
	I0111 07:57:25.753612       1 main.go:178] kindnetd IP family: "ipv4"
	I0111 07:57:25.753641       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2026-01-11T07:57:25Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0111 07:57:25.960233       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0111 07:57:25.960264       1 controller.go:381] "Waiting for informer caches to sync"
	I0111 07:57:25.960289       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0111 07:57:25.960477       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I0111 07:57:26.352995       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0111 07:57:26.353027       1 metrics.go:72] Registering metrics
	I0111 07:57:26.353089       1 controller.go:711] "Syncing nftables rules"
	I0111 07:57:35.960932       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0111 07:57:35.961064       1 main.go:301] handling current node
	I0111 07:57:45.961882       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0111 07:57:45.961932       1 main.go:301] handling current node
	I0111 07:57:55.961022       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0111 07:57:55.961072       1 main.go:301] handling current node
	I0111 07:58:05.961000       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0111 07:58:05.961034       1 main.go:301] handling current node
	I0111 07:58:15.963924       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0111 07:58:15.963961       1 main.go:301] handling current node
	
	
	==> kube-apiserver [04186244a4d8af4b1401de6299129517961f7e404cad46c3f8e49cbbbe78c312] <==
	I0111 07:57:24.755417       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0111 07:57:24.755430       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0111 07:57:24.755715       1 shared_informer.go:377] "Caches are synced"
	I0111 07:57:24.755778       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I0111 07:57:24.755845       1 aggregator.go:187] initial CRD sync complete...
	I0111 07:57:24.755857       1 autoregister_controller.go:144] Starting autoregister controller
	I0111 07:57:24.755865       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0111 07:57:24.755872       1 cache.go:39] Caches are synced for autoregister controller
	I0111 07:57:24.756071       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I0111 07:57:24.763067       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0111 07:57:24.770064       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I0111 07:57:24.788326       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	E0111 07:57:24.817295       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0111 07:57:25.055920       1 controller.go:667] quota admission added evaluator for: namespaces
	I0111 07:57:25.087277       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I0111 07:57:25.115422       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0111 07:57:25.124014       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0111 07:57:25.132613       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I0111 07:57:25.167214       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.103.43.79"}
	I0111 07:57:25.177410       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.102.31.35"}
	I0111 07:57:25.659486       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I0111 07:57:28.358694       1 controller.go:667] quota admission added evaluator for: endpoints
	I0111 07:57:28.409456       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0111 07:57:28.409456       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0111 07:57:28.609367       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [a56589dc7dd4d5b92389fcdb2961a8b65557ca9359492ffc4fb9f790b05e4dd4] <==
	I0111 07:57:27.918247       1 shared_informer.go:377] "Caches are synced"
	I0111 07:57:27.918411       1 shared_informer.go:377] "Caches are synced"
	I0111 07:57:27.918487       1 shared_informer.go:377] "Caches are synced"
	I0111 07:57:27.918732       1 shared_informer.go:377] "Caches are synced"
	I0111 07:57:27.919254       1 shared_informer.go:377] "Caches are synced"
	I0111 07:57:27.919557       1 shared_informer.go:377] "Caches are synced"
	I0111 07:57:27.920027       1 shared_informer.go:377] "Caches are synced"
	I0111 07:57:27.920399       1 shared_informer.go:377] "Caches are synced"
	I0111 07:57:27.920728       1 shared_informer.go:370] "Waiting for caches to sync"
	I0111 07:57:27.920973       1 shared_informer.go:377] "Caches are synced"
	I0111 07:57:27.921161       1 shared_informer.go:377] "Caches are synced"
	I0111 07:57:27.921232       1 shared_informer.go:377] "Caches are synced"
	I0111 07:57:27.921601       1 shared_informer.go:377] "Caches are synced"
	I0111 07:57:27.921962       1 shared_informer.go:377] "Caches are synced"
	I0111 07:57:27.925658       1 shared_informer.go:377] "Caches are synced"
	I0111 07:57:27.925678       1 shared_informer.go:377] "Caches are synced"
	I0111 07:57:27.927548       1 shared_informer.go:377] "Caches are synced"
	I0111 07:57:27.929529       1 shared_informer.go:377] "Caches are synced"
	I0111 07:57:27.934938       1 shared_informer.go:377] "Caches are synced"
	I0111 07:57:27.935288       1 shared_informer.go:377] "Caches are synced"
	I0111 07:57:27.938240       1 shared_informer.go:377] "Caches are synced"
	I0111 07:57:28.016999       1 shared_informer.go:377] "Caches are synced"
	I0111 07:57:28.017083       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I0111 07:57:28.017094       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I0111 07:57:28.020887       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [c3df2fd79f57075ef7e6564c31e44d2dbd21643d54ef7484250e39a61d8830e3] <==
	I0111 07:57:25.527137       1 server_linux.go:53] "Using iptables proxy"
	I0111 07:57:25.591132       1 shared_informer.go:370] "Waiting for caches to sync"
	I0111 07:57:25.691639       1 shared_informer.go:377] "Caches are synced"
	I0111 07:57:25.691681       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E0111 07:57:25.691793       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0111 07:57:25.715644       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0111 07:57:25.715722       1 server_linux.go:136] "Using iptables Proxier"
	I0111 07:57:25.722403       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0111 07:57:25.723375       1 server.go:529] "Version info" version="v1.35.0"
	I0111 07:57:25.723424       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0111 07:57:25.725406       1 config.go:309] "Starting node config controller"
	I0111 07:57:25.725426       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0111 07:57:25.725440       1 config.go:106] "Starting endpoint slice config controller"
	I0111 07:57:25.725457       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0111 07:57:25.725485       1 config.go:200] "Starting service config controller"
	I0111 07:57:25.725492       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0111 07:57:25.725855       1 config.go:403] "Starting serviceCIDR config controller"
	I0111 07:57:25.725875       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0111 07:57:25.825623       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0111 07:57:25.825643       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0111 07:57:25.826502       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0111 07:57:25.827683       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [8198831067c65d7e7b8f32225ee8852e7d6229fee77dc266d93b4d5a687f2cd3] <==
	I0111 07:57:23.090088       1 serving.go:386] Generated self-signed cert in-memory
	W0111 07:57:24.686548       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0111 07:57:24.686588       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system": RBAC: [role.rbac.authorization.k8s.io "system::leader-locking-kube-scheduler" not found, role.rbac.authorization.k8s.io "extension-apiserver-authentication-reader" not found]
	W0111 07:57:24.686602       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0111 07:57:24.686611       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0111 07:57:24.714400       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0"
	I0111 07:57:24.714489       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0111 07:57:24.716434       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0111 07:57:24.716467       1 shared_informer.go:370] "Waiting for caches to sync"
	I0111 07:57:24.716625       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0111 07:57:24.716700       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0111 07:57:24.817618       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Jan 11 07:57:42 no-preload-875594 kubelet[718]: E0111 07:57:42.209124     718 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-no-preload-875594" containerName="kube-apiserver"
	Jan 11 07:57:43 no-preload-875594 kubelet[718]: E0111 07:57:43.119878     718 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-944ph" containerName="dashboard-metrics-scraper"
	Jan 11 07:57:43 no-preload-875594 kubelet[718]: I0111 07:57:43.119914     718 scope.go:122] "RemoveContainer" containerID="7de5159f3687df4074a89fdcea0b5e3baf17cacaa1b560ed886a26db786b9bca"
	Jan 11 07:57:43 no-preload-875594 kubelet[718]: I0111 07:57:43.212632     718 scope.go:122] "RemoveContainer" containerID="7de5159f3687df4074a89fdcea0b5e3baf17cacaa1b560ed886a26db786b9bca"
	Jan 11 07:57:43 no-preload-875594 kubelet[718]: E0111 07:57:43.212979     718 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-944ph" containerName="dashboard-metrics-scraper"
	Jan 11 07:57:43 no-preload-875594 kubelet[718]: I0111 07:57:43.213017     718 scope.go:122] "RemoveContainer" containerID="5e61dc34967c58857374ba5caf2dcc50449512e3ab4d58f44031a265354250b3"
	Jan 11 07:57:43 no-preload-875594 kubelet[718]: E0111 07:57:43.213276     718 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-944ph_kubernetes-dashboard(8566b7d9-2a44-457d-b1a7-776b0275faa6)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-944ph" podUID="8566b7d9-2a44-457d-b1a7-776b0275faa6"
	Jan 11 07:57:51 no-preload-875594 kubelet[718]: E0111 07:57:51.601304     718 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-944ph" containerName="dashboard-metrics-scraper"
	Jan 11 07:57:51 no-preload-875594 kubelet[718]: I0111 07:57:51.601360     718 scope.go:122] "RemoveContainer" containerID="5e61dc34967c58857374ba5caf2dcc50449512e3ab4d58f44031a265354250b3"
	Jan 11 07:57:51 no-preload-875594 kubelet[718]: E0111 07:57:51.601623     718 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-944ph_kubernetes-dashboard(8566b7d9-2a44-457d-b1a7-776b0275faa6)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-944ph" podUID="8566b7d9-2a44-457d-b1a7-776b0275faa6"
	Jan 11 07:57:56 no-preload-875594 kubelet[718]: I0111 07:57:56.247480     718 scope.go:122] "RemoveContainer" containerID="a15969534d074b462177067db2228455e3f65e99df3930f8fb7ec7f2f41160f8"
	Jan 11 07:58:00 no-preload-875594 kubelet[718]: E0111 07:58:00.763048     718 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-gw9b6" containerName="coredns"
	Jan 11 07:58:05 no-preload-875594 kubelet[718]: E0111 07:58:05.119198     718 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-944ph" containerName="dashboard-metrics-scraper"
	Jan 11 07:58:05 no-preload-875594 kubelet[718]: I0111 07:58:05.119249     718 scope.go:122] "RemoveContainer" containerID="5e61dc34967c58857374ba5caf2dcc50449512e3ab4d58f44031a265354250b3"
	Jan 11 07:58:05 no-preload-875594 kubelet[718]: I0111 07:58:05.273496     718 scope.go:122] "RemoveContainer" containerID="5e61dc34967c58857374ba5caf2dcc50449512e3ab4d58f44031a265354250b3"
	Jan 11 07:58:05 no-preload-875594 kubelet[718]: E0111 07:58:05.273711     718 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-944ph" containerName="dashboard-metrics-scraper"
	Jan 11 07:58:05 no-preload-875594 kubelet[718]: I0111 07:58:05.273753     718 scope.go:122] "RemoveContainer" containerID="a8c9b7ad1d71fbba8c61c0c3d563b75c6c90a4509aeb87ca901fe8c8c58e528d"
	Jan 11 07:58:05 no-preload-875594 kubelet[718]: E0111 07:58:05.273981     718 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-944ph_kubernetes-dashboard(8566b7d9-2a44-457d-b1a7-776b0275faa6)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-944ph" podUID="8566b7d9-2a44-457d-b1a7-776b0275faa6"
	Jan 11 07:58:11 no-preload-875594 kubelet[718]: E0111 07:58:11.600990     718 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-944ph" containerName="dashboard-metrics-scraper"
	Jan 11 07:58:11 no-preload-875594 kubelet[718]: I0111 07:58:11.601040     718 scope.go:122] "RemoveContainer" containerID="a8c9b7ad1d71fbba8c61c0c3d563b75c6c90a4509aeb87ca901fe8c8c58e528d"
	Jan 11 07:58:11 no-preload-875594 kubelet[718]: E0111 07:58:11.601258     718 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-944ph_kubernetes-dashboard(8566b7d9-2a44-457d-b1a7-776b0275faa6)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-944ph" podUID="8566b7d9-2a44-457d-b1a7-776b0275faa6"
	Jan 11 07:58:14 no-preload-875594 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Jan 11 07:58:14 no-preload-875594 systemd[1]: kubelet.service: Deactivated successfully.
	Jan 11 07:58:14 no-preload-875594 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Jan 11 07:58:14 no-preload-875594 systemd[1]: kubelet.service: Consumed 1.735s CPU time.
	
	
	==> kubernetes-dashboard [c8f540e41b1874f8da0335702431dc675c4957ca96add1e1ff08f9e41f526ffa] <==
	2026/01/11 07:57:35 Starting overwatch
	2026/01/11 07:57:35 Using namespace: kubernetes-dashboard
	2026/01/11 07:57:35 Using in-cluster config to connect to apiserver
	2026/01/11 07:57:35 Using secret token for csrf signing
	2026/01/11 07:57:35 Initializing csrf token from kubernetes-dashboard-csrf secret
	2026/01/11 07:57:35 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2026/01/11 07:57:35 Successful initial request to the apiserver, version: v1.35.0
	2026/01/11 07:57:35 Generating JWE encryption key
	2026/01/11 07:57:35 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2026/01/11 07:57:35 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2026/01/11 07:57:35 Initializing JWE encryption key from synchronized object
	2026/01/11 07:57:35 Creating in-cluster Sidecar client
	2026/01/11 07:57:35 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2026/01/11 07:57:35 Serving insecurely on HTTP port: 9090
	2026/01/11 07:58:05 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [8e14bd67002a296324708f9d3d673b05eb3191d9b342a923b346b535dc8eb241] <==
	I0111 07:57:56.295160       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0111 07:57:56.302763       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0111 07:57:56.302834       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W0111 07:57:56.305022       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 07:57:59.760787       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 07:58:04.021575       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 07:58:07.620474       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 07:58:10.674008       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 07:58:13.698671       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 07:58:13.704905       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0111 07:58:13.705088       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0111 07:58:13.705271       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-875594_1f3c10ca-609d-41d5-9c8d-a80c9e794387!
	I0111 07:58:13.705419       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"adcc6b4b-1fe9-448b-b2d9-cf067e532a3b", APIVersion:"v1", ResourceVersion:"679", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-875594_1f3c10ca-609d-41d5-9c8d-a80c9e794387 became leader
	W0111 07:58:13.708189       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 07:58:13.717213       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0111 07:58:13.805733       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-875594_1f3c10ca-609d-41d5-9c8d-a80c9e794387!
	W0111 07:58:15.720094       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 07:58:15.724439       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 07:58:17.730400       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 07:58:17.738550       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [a15969534d074b462177067db2228455e3f65e99df3930f8fb7ec7f2f41160f8] <==
	I0111 07:57:25.499240       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0111 07:57:55.501349       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-875594 -n no-preload-875594
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-875594 -n no-preload-875594: exit status 2 (454.300904ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context no-preload-875594 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-875594
helpers_test.go:244: (dbg) docker inspect no-preload-875594:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "7deefeed7d706251e89e7d05143766bbaeeb5dd4cef2384ff91186656aae8676",
	        "Created": "2026-01-11T07:55:58.906575033Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 313241,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2026-01-11T07:57:15.380059408Z",
	            "FinishedAt": "2026-01-11T07:57:14.237616449Z"
	        },
	        "Image": "sha256:9d81ad6f098dcf669510a21e6f98da4d3301079c139c22f3d7bcee050830f45e",
	        "ResolvConfPath": "/var/lib/docker/containers/7deefeed7d706251e89e7d05143766bbaeeb5dd4cef2384ff91186656aae8676/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7deefeed7d706251e89e7d05143766bbaeeb5dd4cef2384ff91186656aae8676/hostname",
	        "HostsPath": "/var/lib/docker/containers/7deefeed7d706251e89e7d05143766bbaeeb5dd4cef2384ff91186656aae8676/hosts",
	        "LogPath": "/var/lib/docker/containers/7deefeed7d706251e89e7d05143766bbaeeb5dd4cef2384ff91186656aae8676/7deefeed7d706251e89e7d05143766bbaeeb5dd4cef2384ff91186656aae8676-json.log",
	        "Name": "/no-preload-875594",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-875594:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-875594",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "7deefeed7d706251e89e7d05143766bbaeeb5dd4cef2384ff91186656aae8676",
	                "LowerDir": "/var/lib/docker/overlay2/838dead28b8cc9aab45322b83a05d005735e451912ea523da4284e810b6d4ac3-init/diff:/var/lib/docker/overlay2/fbed3e2386e680328722e9ed68250a4aae1dd3b50a64848ee2ebb685fb1cc002/diff",
	                "MergedDir": "/var/lib/docker/overlay2/838dead28b8cc9aab45322b83a05d005735e451912ea523da4284e810b6d4ac3/merged",
	                "UpperDir": "/var/lib/docker/overlay2/838dead28b8cc9aab45322b83a05d005735e451912ea523da4284e810b6d4ac3/diff",
	                "WorkDir": "/var/lib/docker/overlay2/838dead28b8cc9aab45322b83a05d005735e451912ea523da4284e810b6d4ac3/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-875594",
	                "Source": "/var/lib/docker/volumes/no-preload-875594/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-875594",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-875594",
	                "name.minikube.sigs.k8s.io": "no-preload-875594",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "c840b7f762ffa22847a67e64e4eaa6c32604d14e49589caabb3e94fc1fcf13ca",
	            "SandboxKey": "/var/run/docker/netns/c840b7f762ff",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33113"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33114"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33117"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33115"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33116"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-875594": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "e8428f34acce91e05870737f0d974d96eb8dd52ccd21658be3061a5635b849ca",
	                    "EndpointID": "61fefbdcd24068a6a0c9f13853091570a4e9a9a21d385e2d4c1d29b78db8ed0f",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "0e:de:63:8a:15:c4",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-875594",
	                        "7deefeed7d70"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-875594 -n no-preload-875594
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-875594 -n no-preload-875594: exit status 2 (463.589692ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-875594 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p no-preload-875594 logs -n 25: (1.491597821s)
helpers_test.go:261: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p disable-driver-mounts-768607                                                                                                                                                                                                               │ disable-driver-mounts-768607 │ jenkins │ v1.37.0 │ 11 Jan 26 07:56 UTC │ 11 Jan 26 07:56 UTC │
	│ start   │ -p default-k8s-diff-port-585688 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-585688 │ jenkins │ v1.37.0 │ 11 Jan 26 07:56 UTC │ 11 Jan 26 07:57 UTC │
	│ addons  │ enable metrics-server -p no-preload-875594 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-875594            │ jenkins │ v1.37.0 │ 11 Jan 26 07:56 UTC │                     │
	│ stop    │ -p no-preload-875594 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-875594            │ jenkins │ v1.37.0 │ 11 Jan 26 07:56 UTC │ 11 Jan 26 07:57 UTC │
	│ addons  │ enable metrics-server -p embed-certs-285966 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-285966           │ jenkins │ v1.37.0 │ 11 Jan 26 07:57 UTC │                     │
	│ stop    │ -p embed-certs-285966 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-285966           │ jenkins │ v1.37.0 │ 11 Jan 26 07:57 UTC │ 11 Jan 26 07:57 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-086314 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-086314       │ jenkins │ v1.37.0 │ 11 Jan 26 07:57 UTC │ 11 Jan 26 07:57 UTC │
	│ start   │ -p old-k8s-version-086314 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-086314       │ jenkins │ v1.37.0 │ 11 Jan 26 07:57 UTC │ 11 Jan 26 07:57 UTC │
	│ addons  │ enable dashboard -p no-preload-875594 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-875594            │ jenkins │ v1.37.0 │ 11 Jan 26 07:57 UTC │ 11 Jan 26 07:57 UTC │
	│ start   │ -p no-preload-875594 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                       │ no-preload-875594            │ jenkins │ v1.37.0 │ 11 Jan 26 07:57 UTC │ 11 Jan 26 07:58 UTC │
	│ addons  │ enable dashboard -p embed-certs-285966 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-285966           │ jenkins │ v1.37.0 │ 11 Jan 26 07:57 UTC │ 11 Jan 26 07:57 UTC │
	│ start   │ -p embed-certs-285966 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                        │ embed-certs-285966           │ jenkins │ v1.37.0 │ 11 Jan 26 07:57 UTC │ 11 Jan 26 07:58 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-585688 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-585688 │ jenkins │ v1.37.0 │ 11 Jan 26 07:57 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-585688 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-585688 │ jenkins │ v1.37.0 │ 11 Jan 26 07:57 UTC │ 11 Jan 26 07:58 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-585688 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-585688 │ jenkins │ v1.37.0 │ 11 Jan 26 07:58 UTC │ 11 Jan 26 07:58 UTC │
	│ start   │ -p default-k8s-diff-port-585688 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-585688 │ jenkins │ v1.37.0 │ 11 Jan 26 07:58 UTC │                     │
	│ image   │ old-k8s-version-086314 image list --format=json                                                                                                                                                                                               │ old-k8s-version-086314       │ jenkins │ v1.37.0 │ 11 Jan 26 07:58 UTC │ 11 Jan 26 07:58 UTC │
	│ pause   │ -p old-k8s-version-086314 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-086314       │ jenkins │ v1.37.0 │ 11 Jan 26 07:58 UTC │                     │
	│ delete  │ -p old-k8s-version-086314                                                                                                                                                                                                                     │ old-k8s-version-086314       │ jenkins │ v1.37.0 │ 11 Jan 26 07:58 UTC │ 11 Jan 26 07:58 UTC │
	│ image   │ no-preload-875594 image list --format=json                                                                                                                                                                                                    │ no-preload-875594            │ jenkins │ v1.37.0 │ 11 Jan 26 07:58 UTC │ 11 Jan 26 07:58 UTC │
	│ pause   │ -p no-preload-875594 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-875594            │ jenkins │ v1.37.0 │ 11 Jan 26 07:58 UTC │                     │
	│ image   │ embed-certs-285966 image list --format=json                                                                                                                                                                                                   │ embed-certs-285966           │ jenkins │ v1.37.0 │ 11 Jan 26 07:58 UTC │ 11 Jan 26 07:58 UTC │
	│ pause   │ -p embed-certs-285966 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-285966           │ jenkins │ v1.37.0 │ 11 Jan 26 07:58 UTC │                     │
	│ delete  │ -p old-k8s-version-086314                                                                                                                                                                                                                     │ old-k8s-version-086314       │ jenkins │ v1.37.0 │ 11 Jan 26 07:58 UTC │ 11 Jan 26 07:58 UTC │
	│ start   │ -p newest-cni-613901 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ newest-cni-613901            │ jenkins │ v1.37.0 │ 11 Jan 26 07:58 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2026/01/11 07:58:16
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0111 07:58:16.814764  326324 out.go:360] Setting OutFile to fd 1 ...
	I0111 07:58:16.815091  326324 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 07:58:16.815104  326324 out.go:374] Setting ErrFile to fd 2...
	I0111 07:58:16.815110  326324 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 07:58:16.815339  326324 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22402-4689/.minikube/bin
	I0111 07:58:16.815856  326324 out.go:368] Setting JSON to false
	I0111 07:58:16.817178  326324 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2445,"bootTime":1768115852,"procs":381,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0111 07:58:16.817250  326324 start.go:143] virtualization: kvm guest
	I0111 07:58:16.818975  326324 out.go:179] * [newest-cni-613901] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0111 07:58:16.820186  326324 out.go:179]   - MINIKUBE_LOCATION=22402
	I0111 07:58:16.820204  326324 notify.go:221] Checking for updates...
	I0111 07:58:16.824310  326324 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0111 07:58:16.826352  326324 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22402-4689/kubeconfig
	I0111 07:58:16.827706  326324 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22402-4689/.minikube
	I0111 07:58:16.829851  326324 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0111 07:58:16.832258  326324 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0111 07:58:16.833851  326324 config.go:182] Loaded profile config "default-k8s-diff-port-585688": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 07:58:16.834026  326324 config.go:182] Loaded profile config "embed-certs-285966": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 07:58:16.834175  326324 config.go:182] Loaded profile config "no-preload-875594": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 07:58:16.834283  326324 driver.go:422] Setting default libvirt URI to qemu:///system
	I0111 07:58:16.865938  326324 docker.go:124] docker version: linux-29.1.4:Docker Engine - Community
	I0111 07:58:16.866095  326324 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0111 07:58:16.965718  326324 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2026-01-11 07:58:16.953541265 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0111 07:58:16.965962  326324 docker.go:319] overlay module found
	I0111 07:58:16.968424  326324 out.go:179] * Using the docker driver based on user configuration
	I0111 07:58:16.969560  326324 start.go:309] selected driver: docker
	I0111 07:58:16.969575  326324 start.go:928] validating driver "docker" against <nil>
	I0111 07:58:16.969602  326324 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0111 07:58:16.970417  326324 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0111 07:58:17.045988  326324 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2026-01-11 07:58:17.031518631 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0111 07:58:17.046498  326324 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	W0111 07:58:17.046661  326324 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0111 07:58:17.047126  326324 start_flags.go:1038] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0111 07:58:17.048712  326324 out.go:179] * Using Docker driver with root privileges
	I0111 07:58:17.049780  326324 cni.go:84] Creating CNI manager for ""
	I0111 07:58:17.049871  326324 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0111 07:58:17.049886  326324 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I0111 07:58:17.049960  326324 start.go:353] cluster config:
	{Name:newest-cni-613901 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-613901 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnet
ClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0111 07:58:17.051237  326324 out.go:179] * Starting "newest-cni-613901" primary control-plane node in "newest-cni-613901" cluster
	I0111 07:58:17.052435  326324 cache.go:134] Beginning downloading kic base image for docker with crio
	I0111 07:58:17.053867  326324 out.go:179] * Pulling base image v0.0.48-1768032998-22402 ...
	I0111 07:58:17.055044  326324 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0111 07:58:17.055082  326324 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 in local docker daemon
	I0111 07:58:17.055090  326324 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22402-4689/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4
	I0111 07:58:17.055101  326324 cache.go:65] Caching tarball of preloaded images
	I0111 07:58:17.055186  326324 preload.go:251] Found /home/jenkins/minikube-integration/22402-4689/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0111 07:58:17.055203  326324 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I0111 07:58:17.055332  326324 profile.go:143] Saving config to /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/newest-cni-613901/config.json ...
	I0111 07:58:17.055362  326324 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/newest-cni-613901/config.json: {Name:mk70cac531cb2184da90332e0b5263855a47642d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 07:58:17.083176  326324 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 in local docker daemon, skipping pull
	I0111 07:58:17.083200  326324 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 exists in daemon, skipping load
	I0111 07:58:17.083213  326324 cache.go:243] Successfully downloaded all kic artifacts
	I0111 07:58:17.083245  326324 start.go:360] acquireMachinesLock for newest-cni-613901: {Name:mk432dc96cb4574bc050bdfd86d1212df9b7472d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0111 07:58:17.083338  326324 start.go:364] duration metric: took 76.086µs to acquireMachinesLock for "newest-cni-613901"
	I0111 07:58:17.083364  326324 start.go:93] Provisioning new machine with config: &{Name:newest-cni-613901 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-613901 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fals
e DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0111 07:58:17.083461  326324 start.go:125] createHost starting for "" (driver="docker")
	I0111 07:58:14.188953  321160 api_server.go:299] Checking apiserver healthz at https://192.168.94.2:8444/healthz ...
	I0111 07:58:14.194914  321160 api_server.go:325] https://192.168.94.2:8444/healthz returned 200:
	ok
	I0111 07:58:14.196293  321160 api_server.go:141] control plane version: v1.35.0
	I0111 07:58:14.196333  321160 api_server.go:131] duration metric: took 1.007459811s to wait for apiserver health ...
	I0111 07:58:14.196346  321160 system_pods.go:43] waiting for kube-system pods to appear ...
	I0111 07:58:14.201655  321160 system_pods.go:59] 8 kube-system pods found
	I0111 07:58:14.201690  321160 system_pods.go:61] "coredns-7d764666f9-9n4hp" [9e11784a-1ace-44c0-8f9e-656ed390ed43] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0111 07:58:14.201701  321160 system_pods.go:61] "etcd-default-k8s-diff-port-585688" [8a1723a7-4506-4cbe-b1b5-c6f9df32979c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0111 07:58:14.201710  321160 system_pods.go:61] "kindnet-lz5hh" [035a0293-8ca4-4f7c-8cc7-33f5f35eee60] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0111 07:58:14.201719  321160 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-585688" [2dd2d325-0f00-43c4-b8c3-40a121ad8236] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0111 07:58:14.201729  321160 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-585688" [f8d9761a-cb04-4f60-a85c-4e5d711937e5] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0111 07:58:14.201737  321160 system_pods.go:61] "kube-proxy-5z6rc" [4f0dd5f1-a24c-4f95-8434-9ff6e40b85e6] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0111 07:58:14.201744  321160 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-585688" [c22fa171-44f1-4321-b77a-43088630dfc2] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0111 07:58:14.201751  321160 system_pods.go:61] "storage-provisioner" [7f00c923-02d5-4873-968e-207c6eaa9567] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0111 07:58:14.201758  321160 system_pods.go:74] duration metric: took 5.404407ms to wait for pod list to return data ...
	I0111 07:58:14.201769  321160 default_sa.go:34] waiting for default service account to be created ...
	I0111 07:58:14.204181  321160 default_sa.go:45] found service account: "default"
	I0111 07:58:14.204202  321160 default_sa.go:55] duration metric: took 2.426319ms for default service account to be created ...
	I0111 07:58:14.204212  321160 system_pods.go:116] waiting for k8s-apps to be running ...
	I0111 07:58:14.207287  321160 system_pods.go:86] 8 kube-system pods found
	I0111 07:58:14.207316  321160 system_pods.go:89] "coredns-7d764666f9-9n4hp" [9e11784a-1ace-44c0-8f9e-656ed390ed43] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0111 07:58:14.207327  321160 system_pods.go:89] "etcd-default-k8s-diff-port-585688" [8a1723a7-4506-4cbe-b1b5-c6f9df32979c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0111 07:58:14.207337  321160 system_pods.go:89] "kindnet-lz5hh" [035a0293-8ca4-4f7c-8cc7-33f5f35eee60] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0111 07:58:14.207349  321160 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-585688" [2dd2d325-0f00-43c4-b8c3-40a121ad8236] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0111 07:58:14.207358  321160 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-585688" [f8d9761a-cb04-4f60-a85c-4e5d711937e5] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0111 07:58:14.207369  321160 system_pods.go:89] "kube-proxy-5z6rc" [4f0dd5f1-a24c-4f95-8434-9ff6e40b85e6] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0111 07:58:14.207377  321160 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-585688" [c22fa171-44f1-4321-b77a-43088630dfc2] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0111 07:58:14.207384  321160 system_pods.go:89] "storage-provisioner" [7f00c923-02d5-4873-968e-207c6eaa9567] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0111 07:58:14.207393  321160 system_pods.go:126] duration metric: took 3.175161ms to wait for k8s-apps to be running ...
	I0111 07:58:14.207401  321160 system_svc.go:44] waiting for kubelet service to be running ....
	I0111 07:58:14.207447  321160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0111 07:58:14.221677  321160 system_svc.go:56] duration metric: took 14.269621ms WaitForService to wait for kubelet
	I0111 07:58:14.221702  321160 kubeadm.go:587] duration metric: took 3.210465004s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0111 07:58:14.221723  321160 node_conditions.go:102] verifying NodePressure condition ...
	I0111 07:58:14.224642  321160 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0111 07:58:14.224671  321160 node_conditions.go:123] node cpu capacity is 8
	I0111 07:58:14.224689  321160 node_conditions.go:105] duration metric: took 2.958907ms to run NodePressure ...
	I0111 07:58:14.224704  321160 start.go:242] waiting for startup goroutines ...
	I0111 07:58:14.224717  321160 start.go:247] waiting for cluster config update ...
	I0111 07:58:14.224735  321160 start.go:256] writing updated cluster config ...
	I0111 07:58:14.225070  321160 ssh_runner.go:195] Run: rm -f paused
	I0111 07:58:14.229613  321160 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0111 07:58:14.233461  321160 pod_ready.go:83] waiting for pod "coredns-7d764666f9-9n4hp" in "kube-system" namespace to be "Ready" or be gone ...
	W0111 07:58:16.239197  321160 pod_ready.go:104] pod "coredns-7d764666f9-9n4hp" is not "Ready", error: <nil>
	W0111 07:58:18.248959  321160 pod_ready.go:104] pod "coredns-7d764666f9-9n4hp" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Jan 11 07:57:43 no-preload-875594 crio[570]: time="2026-01-11T07:57:43.169321907Z" level=info msg="Started container" PID=1763 containerID=5e61dc34967c58857374ba5caf2dcc50449512e3ab4d58f44031a265354250b3 description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-944ph/dashboard-metrics-scraper id=0ece006e-8469-41cf-82c4-f6435128eff9 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f19ec2fc838b20302373dc9424073e7cf3296fd833154a46c4b0e1893316676e
	Jan 11 07:57:43 no-preload-875594 crio[570]: time="2026-01-11T07:57:43.213762927Z" level=info msg="Removing container: 7de5159f3687df4074a89fdcea0b5e3baf17cacaa1b560ed886a26db786b9bca" id=c3407b8b-c219-490f-b41c-87b094a98ff4 name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 11 07:57:43 no-preload-875594 crio[570]: time="2026-01-11T07:57:43.222009812Z" level=info msg="Removed container 7de5159f3687df4074a89fdcea0b5e3baf17cacaa1b560ed886a26db786b9bca: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-944ph/dashboard-metrics-scraper" id=c3407b8b-c219-490f-b41c-87b094a98ff4 name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 11 07:57:56 no-preload-875594 crio[570]: time="2026-01-11T07:57:56.24799573Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=4b091fd0-2e1b-4ff0-83cc-3beac9eb7f8f name=/runtime.v1.ImageService/ImageStatus
	Jan 11 07:57:56 no-preload-875594 crio[570]: time="2026-01-11T07:57:56.24893956Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=eb725990-f981-4e95-95c2-f78259f8cf54 name=/runtime.v1.ImageService/ImageStatus
	Jan 11 07:57:56 no-preload-875594 crio[570]: time="2026-01-11T07:57:56.249988929Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=365330ca-4be5-46dd-bbb6-8935314e1298 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 11 07:57:56 no-preload-875594 crio[570]: time="2026-01-11T07:57:56.25012987Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 11 07:57:56 no-preload-875594 crio[570]: time="2026-01-11T07:57:56.25518514Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 11 07:57:56 no-preload-875594 crio[570]: time="2026-01-11T07:57:56.255346981Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/d854b0e922c36b5a6d569ec5f5f35c156bfbbc70db5a69f84701c23b2752d3d1/merged/etc/passwd: no such file or directory"
	Jan 11 07:57:56 no-preload-875594 crio[570]: time="2026-01-11T07:57:56.255371353Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/d854b0e922c36b5a6d569ec5f5f35c156bfbbc70db5a69f84701c23b2752d3d1/merged/etc/group: no such file or directory"
	Jan 11 07:57:56 no-preload-875594 crio[570]: time="2026-01-11T07:57:56.256292377Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 11 07:57:56 no-preload-875594 crio[570]: time="2026-01-11T07:57:56.280227004Z" level=info msg="Created container 8e14bd67002a296324708f9d3d673b05eb3191d9b342a923b346b535dc8eb241: kube-system/storage-provisioner/storage-provisioner" id=365330ca-4be5-46dd-bbb6-8935314e1298 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 11 07:57:56 no-preload-875594 crio[570]: time="2026-01-11T07:57:56.280848899Z" level=info msg="Starting container: 8e14bd67002a296324708f9d3d673b05eb3191d9b342a923b346b535dc8eb241" id=ffd13d6a-08d5-442a-8e02-466daf34054b name=/runtime.v1.RuntimeService/StartContainer
	Jan 11 07:57:56 no-preload-875594 crio[570]: time="2026-01-11T07:57:56.282640881Z" level=info msg="Started container" PID=1777 containerID=8e14bd67002a296324708f9d3d673b05eb3191d9b342a923b346b535dc8eb241 description=kube-system/storage-provisioner/storage-provisioner id=ffd13d6a-08d5-442a-8e02-466daf34054b name=/runtime.v1.RuntimeService/StartContainer sandboxID=3c2215136938b327b572ac2cd035193ab713997e1cea06e20720a622c1e75525
	Jan 11 07:58:05 no-preload-875594 crio[570]: time="2026-01-11T07:58:05.119877462Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=a1dc5002-5b55-4a52-862b-b5654c477260 name=/runtime.v1.ImageService/ImageStatus
	Jan 11 07:58:05 no-preload-875594 crio[570]: time="2026-01-11T07:58:05.120838134Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=1bd909b5-e2a9-4309-8f1c-5542ce912ad4 name=/runtime.v1.ImageService/ImageStatus
	Jan 11 07:58:05 no-preload-875594 crio[570]: time="2026-01-11T07:58:05.121902013Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-944ph/dashboard-metrics-scraper" id=b56c7a0e-4adb-4bd7-9288-56fc199f96ed name=/runtime.v1.RuntimeService/CreateContainer
	Jan 11 07:58:05 no-preload-875594 crio[570]: time="2026-01-11T07:58:05.122036939Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 11 07:58:05 no-preload-875594 crio[570]: time="2026-01-11T07:58:05.128295726Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 11 07:58:05 no-preload-875594 crio[570]: time="2026-01-11T07:58:05.128744185Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 11 07:58:05 no-preload-875594 crio[570]: time="2026-01-11T07:58:05.15225624Z" level=info msg="Created container a8c9b7ad1d71fbba8c61c0c3d563b75c6c90a4509aeb87ca901fe8c8c58e528d: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-944ph/dashboard-metrics-scraper" id=b56c7a0e-4adb-4bd7-9288-56fc199f96ed name=/runtime.v1.RuntimeService/CreateContainer
	Jan 11 07:58:05 no-preload-875594 crio[570]: time="2026-01-11T07:58:05.152840636Z" level=info msg="Starting container: a8c9b7ad1d71fbba8c61c0c3d563b75c6c90a4509aeb87ca901fe8c8c58e528d" id=0b669824-be6a-4255-b729-8587a3a8732c name=/runtime.v1.RuntimeService/StartContainer
	Jan 11 07:58:05 no-preload-875594 crio[570]: time="2026-01-11T07:58:05.154526275Z" level=info msg="Started container" PID=1816 containerID=a8c9b7ad1d71fbba8c61c0c3d563b75c6c90a4509aeb87ca901fe8c8c58e528d description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-944ph/dashboard-metrics-scraper id=0b669824-be6a-4255-b729-8587a3a8732c name=/runtime.v1.RuntimeService/StartContainer sandboxID=f19ec2fc838b20302373dc9424073e7cf3296fd833154a46c4b0e1893316676e
	Jan 11 07:58:05 no-preload-875594 crio[570]: time="2026-01-11T07:58:05.274852676Z" level=info msg="Removing container: 5e61dc34967c58857374ba5caf2dcc50449512e3ab4d58f44031a265354250b3" id=cddbbc57-ce3d-43fe-a427-b00d4723b0a8 name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 11 07:58:05 no-preload-875594 crio[570]: time="2026-01-11T07:58:05.284409068Z" level=info msg="Removed container 5e61dc34967c58857374ba5caf2dcc50449512e3ab4d58f44031a265354250b3: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-944ph/dashboard-metrics-scraper" id=cddbbc57-ce3d-43fe-a427-b00d4723b0a8 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	a8c9b7ad1d71f       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           14 seconds ago      Exited              dashboard-metrics-scraper   3                   f19ec2fc838b2       dashboard-metrics-scraper-867fb5f87b-944ph   kubernetes-dashboard
	8e14bd67002a2       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           23 seconds ago      Running             storage-provisioner         1                   3c2215136938b       storage-provisioner                          kube-system
	c8f540e41b187       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   44 seconds ago      Running             kubernetes-dashboard        0                   d4a027b93023d       kubernetes-dashboard-b84665fb8-rwbwb         kubernetes-dashboard
	1f5652c6ae96c       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           54 seconds ago      Running             busybox                     1                   ddba5528c98b5       busybox                                      default
	3af5364625761       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                           54 seconds ago      Running             coredns                     0                   44e5c4674cfe5       coredns-7d764666f9-gw9b6                     kube-system
	c3df2fd79f570       32652ff1bbe6b6df16a3dc621fcad4e6e185a2852bffc5847aab79e36c54bfb8                                           54 seconds ago      Running             kube-proxy                  0                   f4552eea3b3ec       kube-proxy-49b4x                             kube-system
	a1c19f090e2e3       4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251                                           54 seconds ago      Running             kindnet-cni                 0                   b9a8c685f1859       kindnet-p7w57                                kube-system
	a15969534d074       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           54 seconds ago      Exited              storage-provisioner         0                   3c2215136938b       storage-provisioner                          kube-system
	86dd9362e4e98       0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2                                           57 seconds ago      Running             etcd                        0                   a60701e0c2c26       etcd-no-preload-875594                       kube-system
	04186244a4d8a       5c6acd67e9cd10eb60a246cd233db251ef62ea97e6572f897e873f0cb648f499                                           57 seconds ago      Running             kube-apiserver              0                   f9a4f746d9d2a       kube-apiserver-no-preload-875594             kube-system
	a56589dc7dd4d       2c9a4b058bd7e6ec479c38f9e8a1dac2f5ee5b0a3ebda6dfac968e8720229508                                           57 seconds ago      Running             kube-controller-manager     0                   d2be7c153d3e5       kube-controller-manager-no-preload-875594    kube-system
	8198831067c65       550794e3b12ac21ec7fd940bdfb45f7c8dae3c52acd8c70e580b882de20c3dcc                                           57 seconds ago      Running             kube-scheduler              0                   937507af235b6       kube-scheduler-no-preload-875594             kube-system
	
	
	==> coredns [3af5364625761c8a158b701e137ef416af7b328b6f090ab3089466d0fc65554d] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] 127.0.0.1:36671 - 33943 "HINFO IN 429470345494285788.1098832509106181362. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.434608777s
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	
	
	==> describe nodes <==
	Name:               no-preload-875594
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-875594
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4473aa4ffaa416872fe849e19c0ce3dabca02c04
	                    minikube.k8s.io/name=no-preload-875594
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2026_01_11T07_56_28_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 11 Jan 2026 07:56:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-875594
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 11 Jan 2026 07:58:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 11 Jan 2026 07:57:55 +0000   Sun, 11 Jan 2026 07:56:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 11 Jan 2026 07:57:55 +0000   Sun, 11 Jan 2026 07:56:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 11 Jan 2026 07:57:55 +0000   Sun, 11 Jan 2026 07:56:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 11 Jan 2026 07:57:55 +0000   Sun, 11 Jan 2026 07:56:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    no-preload-875594
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 19a608e996fda0d6a8d0aefd69620b69
	  System UUID:                47c67b25-2f7f-4c80-9308-fd505f15ded0
	  Boot ID:                    bf5ef4fd-1e5e-4242-b522-48b308ed11f1
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.35.0
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 coredns-7d764666f9-gw9b6                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     107s
	  kube-system                 etcd-no-preload-875594                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         113s
	  kube-system                 kindnet-p7w57                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      107s
	  kube-system                 kube-apiserver-no-preload-875594              250m (3%)     0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-controller-manager-no-preload-875594     200m (2%)     0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-proxy-49b4x                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system                 kube-scheduler-no-preload-875594              100m (1%)     0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  kubernetes-dashboard        dashboard-metrics-scraper-867fb5f87b-944ph    0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-rwbwb          0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  109s  node-controller  Node no-preload-875594 event: Registered Node no-preload-875594 in Controller
	  Normal  RegisteredNode  53s   node-controller  Node no-preload-875594 event: Registered Node no-preload-875594 in Controller
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff d2 27 60 f0 e8 33 08 06
	[  +7.744805] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ae ab 39 92 a9 51 08 06
	[  +3.804668] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff f6 cc 68 e4 77 08 08 06
	[  +0.000399] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff d2 27 60 f0 e8 33 08 06
	[ +13.685163] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff ce 63 58 3f ae 06 08 06
	[  +0.000359] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ae ab 39 92 a9 51 08 06
	[ +15.294069] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 62 fd 79 5e 2e 8a 08 06
	[  +0.000335] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 92 a7 42 30 c5 33 08 06
	[  +0.001094] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 76 49 4e 78 31 d8 08 06
	[Jan11 07:56] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 1e 72 0a 69 e2 8c 08 06
	[  +0.129588] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 46 31 8d 74 d3 50 08 06
	[ +23.675508] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 72 b4 a7 bf 59 ba 08 06
	[  +0.000409] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 46 31 8d 74 d3 50 08 06
	
	
	==> etcd [86dd9362e4e980669196db9a6fc9ed3859ae025e5b431399b21fd17fef766db2] <==
	{"level":"info","ts":"2026-01-11T07:57:22.696942Z","caller":"membership/cluster.go:433","msg":"ignore already added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2026-01-11T07:57:22.697002Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","from":"3.6","to":"3.6"}
	{"level":"info","ts":"2026-01-11T07:57:22.697215Z","caller":"etcdserver/server.go:483","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"ea7e25599daad906","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2026-01-11T07:57:22.696552Z","caller":"embed/etcd.go:640","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2026-01-11T07:57:22.697450Z","caller":"embed/etcd.go:292","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2026-01-11T07:57:22.697510Z","caller":"embed/etcd.go:890","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2026-01-11T07:57:22.697990Z","caller":"embed/etcd.go:611","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2026-01-11T07:57:23.686757Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2026-01-11T07:57:23.686845Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2026-01-11T07:57:23.687009Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2026-01-11T07:57:23.687061Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2026-01-11T07:57:23.687081Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2026-01-11T07:57:23.687753Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2026-01-11T07:57:23.687776Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2026-01-11T07:57:23.687818Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2026-01-11T07:57:23.687835Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2026-01-11T07:57:23.689049Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:no-preload-875594 ClientURLs:[https://192.168.76.2:2379]}","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2026-01-11T07:57:23.689079Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-11T07:57:23.689071Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-11T07:57:23.689255Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2026-01-11T07:57:23.689282Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2026-01-11T07:57:23.691400Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-11T07:57:23.691461Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-11T07:57:23.694229Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2026-01-11T07:57:23.694234Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	
	
	==> kernel <==
	 07:58:20 up 40 min,  0 user,  load average: 3.66, 3.50, 2.52
	Linux no-preload-875594 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [a1c19f090e2e328cdfb978691c2690587d4e415bd9b0a43536b9b302bf85ac31] <==
	I0111 07:57:25.752935       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0111 07:57:25.753331       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I0111 07:57:25.753574       1 main.go:148] setting mtu 1500 for CNI 
	I0111 07:57:25.753612       1 main.go:178] kindnetd IP family: "ipv4"
	I0111 07:57:25.753641       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2026-01-11T07:57:25Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0111 07:57:25.960233       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0111 07:57:25.960264       1 controller.go:381] "Waiting for informer caches to sync"
	I0111 07:57:25.960289       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0111 07:57:25.960477       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I0111 07:57:26.352995       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0111 07:57:26.353027       1 metrics.go:72] Registering metrics
	I0111 07:57:26.353089       1 controller.go:711] "Syncing nftables rules"
	I0111 07:57:35.960932       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0111 07:57:35.961064       1 main.go:301] handling current node
	I0111 07:57:45.961882       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0111 07:57:45.961932       1 main.go:301] handling current node
	I0111 07:57:55.961022       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0111 07:57:55.961072       1 main.go:301] handling current node
	I0111 07:58:05.961000       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0111 07:58:05.961034       1 main.go:301] handling current node
	I0111 07:58:15.963924       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0111 07:58:15.963961       1 main.go:301] handling current node
	
	
	==> kube-apiserver [04186244a4d8af4b1401de6299129517961f7e404cad46c3f8e49cbbbe78c312] <==
	I0111 07:57:24.755417       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0111 07:57:24.755430       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0111 07:57:24.755715       1 shared_informer.go:377] "Caches are synced"
	I0111 07:57:24.755778       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I0111 07:57:24.755845       1 aggregator.go:187] initial CRD sync complete...
	I0111 07:57:24.755857       1 autoregister_controller.go:144] Starting autoregister controller
	I0111 07:57:24.755865       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0111 07:57:24.755872       1 cache.go:39] Caches are synced for autoregister controller
	I0111 07:57:24.756071       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I0111 07:57:24.763067       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0111 07:57:24.770064       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I0111 07:57:24.788326       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	E0111 07:57:24.817295       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0111 07:57:25.055920       1 controller.go:667] quota admission added evaluator for: namespaces
	I0111 07:57:25.087277       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I0111 07:57:25.115422       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0111 07:57:25.124014       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0111 07:57:25.132613       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I0111 07:57:25.167214       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.103.43.79"}
	I0111 07:57:25.177410       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.102.31.35"}
	I0111 07:57:25.659486       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I0111 07:57:28.358694       1 controller.go:667] quota admission added evaluator for: endpoints
	I0111 07:57:28.409456       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0111 07:57:28.409456       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0111 07:57:28.609367       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [a56589dc7dd4d5b92389fcdb2961a8b65557ca9359492ffc4fb9f790b05e4dd4] <==
	I0111 07:57:27.918247       1 shared_informer.go:377] "Caches are synced"
	I0111 07:57:27.918411       1 shared_informer.go:377] "Caches are synced"
	I0111 07:57:27.918487       1 shared_informer.go:377] "Caches are synced"
	I0111 07:57:27.918732       1 shared_informer.go:377] "Caches are synced"
	I0111 07:57:27.919254       1 shared_informer.go:377] "Caches are synced"
	I0111 07:57:27.919557       1 shared_informer.go:377] "Caches are synced"
	I0111 07:57:27.920027       1 shared_informer.go:377] "Caches are synced"
	I0111 07:57:27.920399       1 shared_informer.go:377] "Caches are synced"
	I0111 07:57:27.920728       1 shared_informer.go:370] "Waiting for caches to sync"
	I0111 07:57:27.920973       1 shared_informer.go:377] "Caches are synced"
	I0111 07:57:27.921161       1 shared_informer.go:377] "Caches are synced"
	I0111 07:57:27.921232       1 shared_informer.go:377] "Caches are synced"
	I0111 07:57:27.921601       1 shared_informer.go:377] "Caches are synced"
	I0111 07:57:27.921962       1 shared_informer.go:377] "Caches are synced"
	I0111 07:57:27.925658       1 shared_informer.go:377] "Caches are synced"
	I0111 07:57:27.925678       1 shared_informer.go:377] "Caches are synced"
	I0111 07:57:27.927548       1 shared_informer.go:377] "Caches are synced"
	I0111 07:57:27.929529       1 shared_informer.go:377] "Caches are synced"
	I0111 07:57:27.934938       1 shared_informer.go:377] "Caches are synced"
	I0111 07:57:27.935288       1 shared_informer.go:377] "Caches are synced"
	I0111 07:57:27.938240       1 shared_informer.go:377] "Caches are synced"
	I0111 07:57:28.016999       1 shared_informer.go:377] "Caches are synced"
	I0111 07:57:28.017083       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I0111 07:57:28.017094       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I0111 07:57:28.020887       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [c3df2fd79f57075ef7e6564c31e44d2dbd21643d54ef7484250e39a61d8830e3] <==
	I0111 07:57:25.527137       1 server_linux.go:53] "Using iptables proxy"
	I0111 07:57:25.591132       1 shared_informer.go:370] "Waiting for caches to sync"
	I0111 07:57:25.691639       1 shared_informer.go:377] "Caches are synced"
	I0111 07:57:25.691681       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E0111 07:57:25.691793       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0111 07:57:25.715644       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0111 07:57:25.715722       1 server_linux.go:136] "Using iptables Proxier"
	I0111 07:57:25.722403       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0111 07:57:25.723375       1 server.go:529] "Version info" version="v1.35.0"
	I0111 07:57:25.723424       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0111 07:57:25.725406       1 config.go:309] "Starting node config controller"
	I0111 07:57:25.725426       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0111 07:57:25.725440       1 config.go:106] "Starting endpoint slice config controller"
	I0111 07:57:25.725457       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0111 07:57:25.725485       1 config.go:200] "Starting service config controller"
	I0111 07:57:25.725492       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0111 07:57:25.725855       1 config.go:403] "Starting serviceCIDR config controller"
	I0111 07:57:25.725875       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0111 07:57:25.825623       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0111 07:57:25.825643       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0111 07:57:25.826502       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0111 07:57:25.827683       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [8198831067c65d7e7b8f32225ee8852e7d6229fee77dc266d93b4d5a687f2cd3] <==
	I0111 07:57:23.090088       1 serving.go:386] Generated self-signed cert in-memory
	W0111 07:57:24.686548       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0111 07:57:24.686588       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system": RBAC: [role.rbac.authorization.k8s.io "system::leader-locking-kube-scheduler" not found, role.rbac.authorization.k8s.io "extension-apiserver-authentication-reader" not found]
	W0111 07:57:24.686602       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0111 07:57:24.686611       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0111 07:57:24.714400       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0"
	I0111 07:57:24.714489       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0111 07:57:24.716434       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0111 07:57:24.716467       1 shared_informer.go:370] "Waiting for caches to sync"
	I0111 07:57:24.716625       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0111 07:57:24.716700       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0111 07:57:24.817618       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Jan 11 07:57:42 no-preload-875594 kubelet[718]: E0111 07:57:42.209124     718 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-no-preload-875594" containerName="kube-apiserver"
	Jan 11 07:57:43 no-preload-875594 kubelet[718]: E0111 07:57:43.119878     718 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-944ph" containerName="dashboard-metrics-scraper"
	Jan 11 07:57:43 no-preload-875594 kubelet[718]: I0111 07:57:43.119914     718 scope.go:122] "RemoveContainer" containerID="7de5159f3687df4074a89fdcea0b5e3baf17cacaa1b560ed886a26db786b9bca"
	Jan 11 07:57:43 no-preload-875594 kubelet[718]: I0111 07:57:43.212632     718 scope.go:122] "RemoveContainer" containerID="7de5159f3687df4074a89fdcea0b5e3baf17cacaa1b560ed886a26db786b9bca"
	Jan 11 07:57:43 no-preload-875594 kubelet[718]: E0111 07:57:43.212979     718 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-944ph" containerName="dashboard-metrics-scraper"
	Jan 11 07:57:43 no-preload-875594 kubelet[718]: I0111 07:57:43.213017     718 scope.go:122] "RemoveContainer" containerID="5e61dc34967c58857374ba5caf2dcc50449512e3ab4d58f44031a265354250b3"
	Jan 11 07:57:43 no-preload-875594 kubelet[718]: E0111 07:57:43.213276     718 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-944ph_kubernetes-dashboard(8566b7d9-2a44-457d-b1a7-776b0275faa6)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-944ph" podUID="8566b7d9-2a44-457d-b1a7-776b0275faa6"
	Jan 11 07:57:51 no-preload-875594 kubelet[718]: E0111 07:57:51.601304     718 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-944ph" containerName="dashboard-metrics-scraper"
	Jan 11 07:57:51 no-preload-875594 kubelet[718]: I0111 07:57:51.601360     718 scope.go:122] "RemoveContainer" containerID="5e61dc34967c58857374ba5caf2dcc50449512e3ab4d58f44031a265354250b3"
	Jan 11 07:57:51 no-preload-875594 kubelet[718]: E0111 07:57:51.601623     718 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-944ph_kubernetes-dashboard(8566b7d9-2a44-457d-b1a7-776b0275faa6)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-944ph" podUID="8566b7d9-2a44-457d-b1a7-776b0275faa6"
	Jan 11 07:57:56 no-preload-875594 kubelet[718]: I0111 07:57:56.247480     718 scope.go:122] "RemoveContainer" containerID="a15969534d074b462177067db2228455e3f65e99df3930f8fb7ec7f2f41160f8"
	Jan 11 07:58:00 no-preload-875594 kubelet[718]: E0111 07:58:00.763048     718 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-gw9b6" containerName="coredns"
	Jan 11 07:58:05 no-preload-875594 kubelet[718]: E0111 07:58:05.119198     718 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-944ph" containerName="dashboard-metrics-scraper"
	Jan 11 07:58:05 no-preload-875594 kubelet[718]: I0111 07:58:05.119249     718 scope.go:122] "RemoveContainer" containerID="5e61dc34967c58857374ba5caf2dcc50449512e3ab4d58f44031a265354250b3"
	Jan 11 07:58:05 no-preload-875594 kubelet[718]: I0111 07:58:05.273496     718 scope.go:122] "RemoveContainer" containerID="5e61dc34967c58857374ba5caf2dcc50449512e3ab4d58f44031a265354250b3"
	Jan 11 07:58:05 no-preload-875594 kubelet[718]: E0111 07:58:05.273711     718 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-944ph" containerName="dashboard-metrics-scraper"
	Jan 11 07:58:05 no-preload-875594 kubelet[718]: I0111 07:58:05.273753     718 scope.go:122] "RemoveContainer" containerID="a8c9b7ad1d71fbba8c61c0c3d563b75c6c90a4509aeb87ca901fe8c8c58e528d"
	Jan 11 07:58:05 no-preload-875594 kubelet[718]: E0111 07:58:05.273981     718 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-944ph_kubernetes-dashboard(8566b7d9-2a44-457d-b1a7-776b0275faa6)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-944ph" podUID="8566b7d9-2a44-457d-b1a7-776b0275faa6"
	Jan 11 07:58:11 no-preload-875594 kubelet[718]: E0111 07:58:11.600990     718 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-944ph" containerName="dashboard-metrics-scraper"
	Jan 11 07:58:11 no-preload-875594 kubelet[718]: I0111 07:58:11.601040     718 scope.go:122] "RemoveContainer" containerID="a8c9b7ad1d71fbba8c61c0c3d563b75c6c90a4509aeb87ca901fe8c8c58e528d"
	Jan 11 07:58:11 no-preload-875594 kubelet[718]: E0111 07:58:11.601258     718 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-944ph_kubernetes-dashboard(8566b7d9-2a44-457d-b1a7-776b0275faa6)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-944ph" podUID="8566b7d9-2a44-457d-b1a7-776b0275faa6"
	Jan 11 07:58:14 no-preload-875594 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Jan 11 07:58:14 no-preload-875594 systemd[1]: kubelet.service: Deactivated successfully.
	Jan 11 07:58:14 no-preload-875594 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Jan 11 07:58:14 no-preload-875594 systemd[1]: kubelet.service: Consumed 1.735s CPU time.
	
	
	==> kubernetes-dashboard [c8f540e41b1874f8da0335702431dc675c4957ca96add1e1ff08f9e41f526ffa] <==
	2026/01/11 07:57:35 Starting overwatch
	2026/01/11 07:57:35 Using namespace: kubernetes-dashboard
	2026/01/11 07:57:35 Using in-cluster config to connect to apiserver
	2026/01/11 07:57:35 Using secret token for csrf signing
	2026/01/11 07:57:35 Initializing csrf token from kubernetes-dashboard-csrf secret
	2026/01/11 07:57:35 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2026/01/11 07:57:35 Successful initial request to the apiserver, version: v1.35.0
	2026/01/11 07:57:35 Generating JWE encryption key
	2026/01/11 07:57:35 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2026/01/11 07:57:35 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2026/01/11 07:57:35 Initializing JWE encryption key from synchronized object
	2026/01/11 07:57:35 Creating in-cluster Sidecar client
	2026/01/11 07:57:35 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2026/01/11 07:57:35 Serving insecurely on HTTP port: 9090
	2026/01/11 07:58:05 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [8e14bd67002a296324708f9d3d673b05eb3191d9b342a923b346b535dc8eb241] <==
	I0111 07:57:56.295160       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0111 07:57:56.302763       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0111 07:57:56.302834       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W0111 07:57:56.305022       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 07:57:59.760787       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 07:58:04.021575       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 07:58:07.620474       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 07:58:10.674008       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 07:58:13.698671       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 07:58:13.704905       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0111 07:58:13.705088       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0111 07:58:13.705271       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-875594_1f3c10ca-609d-41d5-9c8d-a80c9e794387!
	I0111 07:58:13.705419       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"adcc6b4b-1fe9-448b-b2d9-cf067e532a3b", APIVersion:"v1", ResourceVersion:"679", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-875594_1f3c10ca-609d-41d5-9c8d-a80c9e794387 became leader
	W0111 07:58:13.708189       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 07:58:13.717213       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0111 07:58:13.805733       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-875594_1f3c10ca-609d-41d5-9c8d-a80c9e794387!
	W0111 07:58:15.720094       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 07:58:15.724439       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 07:58:17.730400       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 07:58:17.738550       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 07:58:19.741936       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 07:58:19.746877       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [a15969534d074b462177067db2228455e3f65e99df3930f8fb7ec7f2f41160f8] <==
	I0111 07:57:25.499240       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0111 07:57:55.501349       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-875594 -n no-preload-875594
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-875594 -n no-preload-875594: exit status 2 (388.198583ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context no-preload-875594 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/Pause (7.06s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (8.54s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-285966 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p embed-certs-285966 --alsologtostderr -v=1: exit status 80 (1.904729598s)

                                                
                                                
-- stdout --
	* Pausing node embed-certs-285966 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0111 07:58:14.453887  325194 out.go:360] Setting OutFile to fd 1 ...
	I0111 07:58:14.454200  325194 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 07:58:14.454212  325194 out.go:374] Setting ErrFile to fd 2...
	I0111 07:58:14.454220  325194 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 07:58:14.454442  325194 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22402-4689/.minikube/bin
	I0111 07:58:14.454721  325194 out.go:368] Setting JSON to false
	I0111 07:58:14.454743  325194 mustload.go:66] Loading cluster: embed-certs-285966
	I0111 07:58:14.455147  325194 config.go:182] Loaded profile config "embed-certs-285966": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 07:58:14.455589  325194 cli_runner.go:164] Run: docker container inspect embed-certs-285966 --format={{.State.Status}}
	I0111 07:58:14.474493  325194 host.go:66] Checking if "embed-certs-285966" exists ...
	I0111 07:58:14.474734  325194 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0111 07:58:14.536399  325194 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:78 OomKillDisable:false NGoroutines:85 SystemTime:2026-01-11 07:58:14.524872785 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0111 07:58:14.537157  325194 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22376/minikube-v1.37.0-1767438792-22376-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1767438792-22376/minikube-v1.37.0-1767438792-22376-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1767438792-22376-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) preload-source:auto profile:embed-certs-285966 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) rosetta:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I0111 07:58:14.539231  325194 out.go:179] * Pausing node embed-certs-285966 ... 
	I0111 07:58:14.540477  325194 host.go:66] Checking if "embed-certs-285966" exists ...
	I0111 07:58:14.540713  325194 ssh_runner.go:195] Run: systemctl --version
	I0111 07:58:14.540747  325194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-285966
	I0111 07:58:14.561710  325194 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/22402-4689/.minikube/machines/embed-certs-285966/id_rsa Username:docker}
	I0111 07:58:14.669752  325194 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0111 07:58:14.682616  325194 pause.go:52] kubelet running: true
	I0111 07:58:14.682680  325194 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0111 07:58:14.851492  325194 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I0111 07:58:14.851585  325194 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0111 07:58:14.921624  325194 cri.go:96] found id: "f600ae35f38886b304a554efeb2062cc07675a31d6b67c1422079f15c4424867"
	I0111 07:58:14.921649  325194 cri.go:96] found id: "0009fd589117fa2571af77390024cd0e97da87ad32579cb2345f04fa42a054c0"
	I0111 07:58:14.921653  325194 cri.go:96] found id: "120338a03655e1de2f7130f7e1a1392b48ae76e5908b1a38031ab21d34bce59e"
	I0111 07:58:14.921657  325194 cri.go:96] found id: "620c41d7294111dc05b872982ad046e2a32a88a21c31868000a510d357363b23"
	I0111 07:58:14.921659  325194 cri.go:96] found id: "df0529eb33f7e4986b1917bf3aad1944366f3a1e84f1d7f15c408faa6e03f4f0"
	I0111 07:58:14.921663  325194 cri.go:96] found id: "549b7752722376433db3102bf0433676cbe020d474e238af58fe8f3ce20e7cad"
	I0111 07:58:14.921676  325194 cri.go:96] found id: "922cd28382c3c7abe22799ee282d7e8cdbde7bdf06d9c7ad7752450641f086fe"
	I0111 07:58:14.921680  325194 cri.go:96] found id: "1c93a7283300525c6fa1c9642d2a153684e61da58f29768b0be4390bc81386eb"
	I0111 07:58:14.921683  325194 cri.go:96] found id: "99ff4271fc175e5592daa8e5b70c899fc351f028abff121118ae6f6deb18a8ef"
	I0111 07:58:14.921688  325194 cri.go:96] found id: "fdb98f2e54e541a76a833ea05306d60550094417823aa81ced40c4445786521e"
	I0111 07:58:14.921692  325194 cri.go:96] found id: "bf8be33f60e0451ad75aa007d76c844836b6d2655aeef7c05e71bdd2dddb58bb"
	I0111 07:58:14.921695  325194 cri.go:96] found id: ""
	I0111 07:58:14.921737  325194 ssh_runner.go:195] Run: sudo runc list -f json
	I0111 07:58:14.933543  325194 retry.go:84] will retry after 300ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-11T07:58:14Z" level=error msg="open /run/runc: no such file or directory"
	I0111 07:58:15.250156  325194 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0111 07:58:15.263956  325194 pause.go:52] kubelet running: false
	I0111 07:58:15.264016  325194 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0111 07:58:15.419164  325194 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I0111 07:58:15.419279  325194 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0111 07:58:15.492905  325194 cri.go:96] found id: "f600ae35f38886b304a554efeb2062cc07675a31d6b67c1422079f15c4424867"
	I0111 07:58:15.492932  325194 cri.go:96] found id: "0009fd589117fa2571af77390024cd0e97da87ad32579cb2345f04fa42a054c0"
	I0111 07:58:15.492938  325194 cri.go:96] found id: "120338a03655e1de2f7130f7e1a1392b48ae76e5908b1a38031ab21d34bce59e"
	I0111 07:58:15.492942  325194 cri.go:96] found id: "620c41d7294111dc05b872982ad046e2a32a88a21c31868000a510d357363b23"
	I0111 07:58:15.492946  325194 cri.go:96] found id: "df0529eb33f7e4986b1917bf3aad1944366f3a1e84f1d7f15c408faa6e03f4f0"
	I0111 07:58:15.492951  325194 cri.go:96] found id: "549b7752722376433db3102bf0433676cbe020d474e238af58fe8f3ce20e7cad"
	I0111 07:58:15.492954  325194 cri.go:96] found id: "922cd28382c3c7abe22799ee282d7e8cdbde7bdf06d9c7ad7752450641f086fe"
	I0111 07:58:15.492958  325194 cri.go:96] found id: "1c93a7283300525c6fa1c9642d2a153684e61da58f29768b0be4390bc81386eb"
	I0111 07:58:15.492962  325194 cri.go:96] found id: "99ff4271fc175e5592daa8e5b70c899fc351f028abff121118ae6f6deb18a8ef"
	I0111 07:58:15.492971  325194 cri.go:96] found id: "fdb98f2e54e541a76a833ea05306d60550094417823aa81ced40c4445786521e"
	I0111 07:58:15.492975  325194 cri.go:96] found id: "bf8be33f60e0451ad75aa007d76c844836b6d2655aeef7c05e71bdd2dddb58bb"
	I0111 07:58:15.492979  325194 cri.go:96] found id: ""
	I0111 07:58:15.493020  325194 ssh_runner.go:195] Run: sudo runc list -f json
	I0111 07:58:16.000673  325194 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0111 07:58:16.016931  325194 pause.go:52] kubelet running: false
	I0111 07:58:16.016996  325194 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0111 07:58:16.190884  325194 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I0111 07:58:16.190963  325194 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0111 07:58:16.272913  325194 cri.go:96] found id: "f600ae35f38886b304a554efeb2062cc07675a31d6b67c1422079f15c4424867"
	I0111 07:58:16.272944  325194 cri.go:96] found id: "0009fd589117fa2571af77390024cd0e97da87ad32579cb2345f04fa42a054c0"
	I0111 07:58:16.272951  325194 cri.go:96] found id: "120338a03655e1de2f7130f7e1a1392b48ae76e5908b1a38031ab21d34bce59e"
	I0111 07:58:16.272957  325194 cri.go:96] found id: "620c41d7294111dc05b872982ad046e2a32a88a21c31868000a510d357363b23"
	I0111 07:58:16.272962  325194 cri.go:96] found id: "df0529eb33f7e4986b1917bf3aad1944366f3a1e84f1d7f15c408faa6e03f4f0"
	I0111 07:58:16.272968  325194 cri.go:96] found id: "549b7752722376433db3102bf0433676cbe020d474e238af58fe8f3ce20e7cad"
	I0111 07:58:16.272973  325194 cri.go:96] found id: "922cd28382c3c7abe22799ee282d7e8cdbde7bdf06d9c7ad7752450641f086fe"
	I0111 07:58:16.272978  325194 cri.go:96] found id: "1c93a7283300525c6fa1c9642d2a153684e61da58f29768b0be4390bc81386eb"
	I0111 07:58:16.272983  325194 cri.go:96] found id: "99ff4271fc175e5592daa8e5b70c899fc351f028abff121118ae6f6deb18a8ef"
	I0111 07:58:16.272992  325194 cri.go:96] found id: "fdb98f2e54e541a76a833ea05306d60550094417823aa81ced40c4445786521e"
	I0111 07:58:16.272997  325194 cri.go:96] found id: "bf8be33f60e0451ad75aa007d76c844836b6d2655aeef7c05e71bdd2dddb58bb"
	I0111 07:58:16.273002  325194 cri.go:96] found id: ""
	I0111 07:58:16.273051  325194 ssh_runner.go:195] Run: sudo runc list -f json
	I0111 07:58:16.292738  325194 out.go:203] 
	W0111 07:58:16.294171  325194 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-11T07:58:16Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-11T07:58:16Z" level=error msg="open /run/runc: no such file or directory"
	
	W0111 07:58:16.294205  325194 out.go:285] * 
	* 
	W0111 07:58:16.296332  325194 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0111 07:58:16.297546  325194 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p embed-certs-285966 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect embed-certs-285966
helpers_test.go:244: (dbg) docker inspect embed-certs-285966:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "97da5d38996927edb1a657a3a9511f58b9f254917ccec49ae4cc338d6e0853c0",
	        "Created": "2026-01-11T07:56:18.369460444Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 314795,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2026-01-11T07:57:20.104841224Z",
	            "FinishedAt": "2026-01-11T07:57:19.081756728Z"
	        },
	        "Image": "sha256:9d81ad6f098dcf669510a21e6f98da4d3301079c139c22f3d7bcee050830f45e",
	        "ResolvConfPath": "/var/lib/docker/containers/97da5d38996927edb1a657a3a9511f58b9f254917ccec49ae4cc338d6e0853c0/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/97da5d38996927edb1a657a3a9511f58b9f254917ccec49ae4cc338d6e0853c0/hostname",
	        "HostsPath": "/var/lib/docker/containers/97da5d38996927edb1a657a3a9511f58b9f254917ccec49ae4cc338d6e0853c0/hosts",
	        "LogPath": "/var/lib/docker/containers/97da5d38996927edb1a657a3a9511f58b9f254917ccec49ae4cc338d6e0853c0/97da5d38996927edb1a657a3a9511f58b9f254917ccec49ae4cc338d6e0853c0-json.log",
	        "Name": "/embed-certs-285966",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-285966:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-285966",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "97da5d38996927edb1a657a3a9511f58b9f254917ccec49ae4cc338d6e0853c0",
	                "LowerDir": "/var/lib/docker/overlay2/245cbe138f6501f3caed5f3c5a13b7be8d168bd7e263f71051881c6d647d7c9e-init/diff:/var/lib/docker/overlay2/fbed3e2386e680328722e9ed68250a4aae1dd3b50a64848ee2ebb685fb1cc002/diff",
	                "MergedDir": "/var/lib/docker/overlay2/245cbe138f6501f3caed5f3c5a13b7be8d168bd7e263f71051881c6d647d7c9e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/245cbe138f6501f3caed5f3c5a13b7be8d168bd7e263f71051881c6d647d7c9e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/245cbe138f6501f3caed5f3c5a13b7be8d168bd7e263f71051881c6d647d7c9e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-285966",
	                "Source": "/var/lib/docker/volumes/embed-certs-285966/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-285966",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-285966",
	                "name.minikube.sigs.k8s.io": "embed-certs-285966",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "646418215a969ec59283fc108ba59968d25a3483e4dacdb2b6629e71b86e44ec",
	            "SandboxKey": "/var/run/docker/netns/646418215a96",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33118"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33119"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33122"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33120"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33121"
	                    }
	                ]
	            },
	            "Networks": {
	                "embed-certs-285966": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "381973c00b1f29aaeef3a0b61a9772036cd7ea9cc64cf06cd8af7e4ef146b220",
	                    "EndpointID": "8e7a4a404edc32bdafe18cda07d0c7baf7b0973723859e33c5a5a597a9a7ebab",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "2e:79:68:16:08:df",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-285966",
	                        "97da5d389969"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-285966 -n embed-certs-285966
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-285966 -n embed-certs-285966: exit status 2 (409.941132ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-285966 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-285966 logs -n 25: (1.59852786s)
helpers_test.go:261: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p disable-driver-mounts-768607                                                                                                                                                                                                               │ disable-driver-mounts-768607 │ jenkins │ v1.37.0 │ 11 Jan 26 07:56 UTC │ 11 Jan 26 07:56 UTC │
	│ start   │ -p default-k8s-diff-port-585688 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-585688 │ jenkins │ v1.37.0 │ 11 Jan 26 07:56 UTC │ 11 Jan 26 07:57 UTC │
	│ addons  │ enable metrics-server -p no-preload-875594 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-875594            │ jenkins │ v1.37.0 │ 11 Jan 26 07:56 UTC │                     │
	│ stop    │ -p no-preload-875594 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-875594            │ jenkins │ v1.37.0 │ 11 Jan 26 07:56 UTC │ 11 Jan 26 07:57 UTC │
	│ addons  │ enable metrics-server -p embed-certs-285966 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-285966           │ jenkins │ v1.37.0 │ 11 Jan 26 07:57 UTC │                     │
	│ stop    │ -p embed-certs-285966 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-285966           │ jenkins │ v1.37.0 │ 11 Jan 26 07:57 UTC │ 11 Jan 26 07:57 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-086314 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-086314       │ jenkins │ v1.37.0 │ 11 Jan 26 07:57 UTC │ 11 Jan 26 07:57 UTC │
	│ start   │ -p old-k8s-version-086314 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-086314       │ jenkins │ v1.37.0 │ 11 Jan 26 07:57 UTC │ 11 Jan 26 07:57 UTC │
	│ addons  │ enable dashboard -p no-preload-875594 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-875594            │ jenkins │ v1.37.0 │ 11 Jan 26 07:57 UTC │ 11 Jan 26 07:57 UTC │
	│ start   │ -p no-preload-875594 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                       │ no-preload-875594            │ jenkins │ v1.37.0 │ 11 Jan 26 07:57 UTC │ 11 Jan 26 07:58 UTC │
	│ addons  │ enable dashboard -p embed-certs-285966 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-285966           │ jenkins │ v1.37.0 │ 11 Jan 26 07:57 UTC │ 11 Jan 26 07:57 UTC │
	│ start   │ -p embed-certs-285966 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                        │ embed-certs-285966           │ jenkins │ v1.37.0 │ 11 Jan 26 07:57 UTC │ 11 Jan 26 07:58 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-585688 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-585688 │ jenkins │ v1.37.0 │ 11 Jan 26 07:57 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-585688 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-585688 │ jenkins │ v1.37.0 │ 11 Jan 26 07:57 UTC │ 11 Jan 26 07:58 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-585688 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-585688 │ jenkins │ v1.37.0 │ 11 Jan 26 07:58 UTC │ 11 Jan 26 07:58 UTC │
	│ start   │ -p default-k8s-diff-port-585688 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-585688 │ jenkins │ v1.37.0 │ 11 Jan 26 07:58 UTC │                     │
	│ image   │ old-k8s-version-086314 image list --format=json                                                                                                                                                                                               │ old-k8s-version-086314       │ jenkins │ v1.37.0 │ 11 Jan 26 07:58 UTC │ 11 Jan 26 07:58 UTC │
	│ pause   │ -p old-k8s-version-086314 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-086314       │ jenkins │ v1.37.0 │ 11 Jan 26 07:58 UTC │                     │
	│ delete  │ -p old-k8s-version-086314                                                                                                                                                                                                                     │ old-k8s-version-086314       │ jenkins │ v1.37.0 │ 11 Jan 26 07:58 UTC │ 11 Jan 26 07:58 UTC │
	│ image   │ no-preload-875594 image list --format=json                                                                                                                                                                                                    │ no-preload-875594            │ jenkins │ v1.37.0 │ 11 Jan 26 07:58 UTC │ 11 Jan 26 07:58 UTC │
	│ pause   │ -p no-preload-875594 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-875594            │ jenkins │ v1.37.0 │ 11 Jan 26 07:58 UTC │                     │
	│ image   │ embed-certs-285966 image list --format=json                                                                                                                                                                                                   │ embed-certs-285966           │ jenkins │ v1.37.0 │ 11 Jan 26 07:58 UTC │ 11 Jan 26 07:58 UTC │
	│ pause   │ -p embed-certs-285966 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-285966           │ jenkins │ v1.37.0 │ 11 Jan 26 07:58 UTC │                     │
	│ delete  │ -p old-k8s-version-086314                                                                                                                                                                                                                     │ old-k8s-version-086314       │ jenkins │ v1.37.0 │ 11 Jan 26 07:58 UTC │ 11 Jan 26 07:58 UTC │
	│ start   │ -p newest-cni-613901 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ newest-cni-613901            │ jenkins │ v1.37.0 │ 11 Jan 26 07:58 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2026/01/11 07:58:16
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0111 07:58:16.814764  326324 out.go:360] Setting OutFile to fd 1 ...
	I0111 07:58:16.815091  326324 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 07:58:16.815104  326324 out.go:374] Setting ErrFile to fd 2...
	I0111 07:58:16.815110  326324 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 07:58:16.815339  326324 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22402-4689/.minikube/bin
	I0111 07:58:16.815856  326324 out.go:368] Setting JSON to false
	I0111 07:58:16.817178  326324 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2445,"bootTime":1768115852,"procs":381,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0111 07:58:16.817250  326324 start.go:143] virtualization: kvm guest
	I0111 07:58:16.818975  326324 out.go:179] * [newest-cni-613901] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0111 07:58:16.820186  326324 out.go:179]   - MINIKUBE_LOCATION=22402
	I0111 07:58:16.820204  326324 notify.go:221] Checking for updates...
	I0111 07:58:16.824310  326324 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0111 07:58:16.826352  326324 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22402-4689/kubeconfig
	I0111 07:58:16.827706  326324 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22402-4689/.minikube
	
	
	==> CRI-O <==
	Jan 11 07:57:47 embed-certs-285966 crio[569]: time="2026-01-11T07:57:47.961709754Z" level=info msg="Started container" PID=1789 containerID=e68fec00e5d776a19524cb8271c8c68327eb564209e886604af4a0740b32c596 description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-blplb/dashboard-metrics-scraper id=90fc1e7a-e172-4819-9e6d-667826527ee7 name=/runtime.v1.RuntimeService/StartContainer sandboxID=791aed063c4835893ce28d3a69c70b53f43e6e122c6db27f3a92dd21ce2618a3
	Jan 11 07:57:48 embed-certs-285966 crio[569]: time="2026-01-11T07:57:48.031719919Z" level=info msg="Removing container: 8ce538b6ca6c0b4df31b736aa2644f6edd98aff1278f6dcf5e981eafe02c6d2b" id=001185ed-009d-4837-9a7f-4b9a0b4c18ba name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 11 07:57:48 embed-certs-285966 crio[569]: time="2026-01-11T07:57:48.040828607Z" level=info msg="Removed container 8ce538b6ca6c0b4df31b736aa2644f6edd98aff1278f6dcf5e981eafe02c6d2b: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-blplb/dashboard-metrics-scraper" id=001185ed-009d-4837-9a7f-4b9a0b4c18ba name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 11 07:58:00 embed-certs-285966 crio[569]: time="2026-01-11T07:58:00.063292077Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=db88e11b-3e70-4208-83a3-e5421c1340fd name=/runtime.v1.ImageService/ImageStatus
	Jan 11 07:58:00 embed-certs-285966 crio[569]: time="2026-01-11T07:58:00.064497775Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=679a0b30-ad22-471c-8e76-419731913039 name=/runtime.v1.ImageService/ImageStatus
	Jan 11 07:58:00 embed-certs-285966 crio[569]: time="2026-01-11T07:58:00.065683572Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=a6c5ed38-c5fd-46e4-a776-eb5b800f68b5 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 11 07:58:00 embed-certs-285966 crio[569]: time="2026-01-11T07:58:00.065849752Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 11 07:58:00 embed-certs-285966 crio[569]: time="2026-01-11T07:58:00.07027515Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 11 07:58:00 embed-certs-285966 crio[569]: time="2026-01-11T07:58:00.070461027Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/9084ed72879e2ad40d6a7f2698d4a65f309eb977d3c42af764bfa194601ee6f4/merged/etc/passwd: no such file or directory"
	Jan 11 07:58:00 embed-certs-285966 crio[569]: time="2026-01-11T07:58:00.070492109Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/9084ed72879e2ad40d6a7f2698d4a65f309eb977d3c42af764bfa194601ee6f4/merged/etc/group: no such file or directory"
	Jan 11 07:58:00 embed-certs-285966 crio[569]: time="2026-01-11T07:58:00.070759989Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 11 07:58:00 embed-certs-285966 crio[569]: time="2026-01-11T07:58:00.099774905Z" level=info msg="Created container f600ae35f38886b304a554efeb2062cc07675a31d6b67c1422079f15c4424867: kube-system/storage-provisioner/storage-provisioner" id=a6c5ed38-c5fd-46e4-a776-eb5b800f68b5 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 11 07:58:00 embed-certs-285966 crio[569]: time="2026-01-11T07:58:00.100411818Z" level=info msg="Starting container: f600ae35f38886b304a554efeb2062cc07675a31d6b67c1422079f15c4424867" id=34dfce99-f163-4db5-9e9c-e16a988a2a88 name=/runtime.v1.RuntimeService/StartContainer
	Jan 11 07:58:00 embed-certs-285966 crio[569]: time="2026-01-11T07:58:00.102280067Z" level=info msg="Started container" PID=1803 containerID=f600ae35f38886b304a554efeb2062cc07675a31d6b67c1422079f15c4424867 description=kube-system/storage-provisioner/storage-provisioner id=34dfce99-f163-4db5-9e9c-e16a988a2a88 name=/runtime.v1.RuntimeService/StartContainer sandboxID=dfc987a29c99e5b46c1fd8ce660a665f0960f72c5fc4a68cb6c27e22def91cc8
	Jan 11 07:58:12 embed-certs-285966 crio[569]: time="2026-01-11T07:58:12.921088306Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=ffa294aa-2b96-4e6e-abaf-e53786790d7f name=/runtime.v1.ImageService/ImageStatus
	Jan 11 07:58:12 embed-certs-285966 crio[569]: time="2026-01-11T07:58:12.922307817Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=05256c1d-e787-4901-84ba-6290d484e3f7 name=/runtime.v1.ImageService/ImageStatus
	Jan 11 07:58:12 embed-certs-285966 crio[569]: time="2026-01-11T07:58:12.923623779Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-blplb/dashboard-metrics-scraper" id=890fddea-93d6-4592-8cf8-ef3a026a53cf name=/runtime.v1.RuntimeService/CreateContainer
	Jan 11 07:58:12 embed-certs-285966 crio[569]: time="2026-01-11T07:58:12.923780233Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 11 07:58:12 embed-certs-285966 crio[569]: time="2026-01-11T07:58:12.930920832Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 11 07:58:12 embed-certs-285966 crio[569]: time="2026-01-11T07:58:12.931765205Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 11 07:58:12 embed-certs-285966 crio[569]: time="2026-01-11T07:58:12.962536903Z" level=info msg="Created container fdb98f2e54e541a76a833ea05306d60550094417823aa81ced40c4445786521e: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-blplb/dashboard-metrics-scraper" id=890fddea-93d6-4592-8cf8-ef3a026a53cf name=/runtime.v1.RuntimeService/CreateContainer
	Jan 11 07:58:12 embed-certs-285966 crio[569]: time="2026-01-11T07:58:12.963311347Z" level=info msg="Starting container: fdb98f2e54e541a76a833ea05306d60550094417823aa81ced40c4445786521e" id=e3f9892a-d2ce-4aea-9026-74865b8b724d name=/runtime.v1.RuntimeService/StartContainer
	Jan 11 07:58:12 embed-certs-285966 crio[569]: time="2026-01-11T07:58:12.965605486Z" level=info msg="Started container" PID=1842 containerID=fdb98f2e54e541a76a833ea05306d60550094417823aa81ced40c4445786521e description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-blplb/dashboard-metrics-scraper id=e3f9892a-d2ce-4aea-9026-74865b8b724d name=/runtime.v1.RuntimeService/StartContainer sandboxID=791aed063c4835893ce28d3a69c70b53f43e6e122c6db27f3a92dd21ce2618a3
	Jan 11 07:58:13 embed-certs-285966 crio[569]: time="2026-01-11T07:58:13.104995257Z" level=info msg="Removing container: e68fec00e5d776a19524cb8271c8c68327eb564209e886604af4a0740b32c596" id=9991e3e7-ef9d-40e2-8314-c530cd534188 name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 11 07:58:13 embed-certs-285966 crio[569]: time="2026-01-11T07:58:13.116456108Z" level=info msg="Removed container e68fec00e5d776a19524cb8271c8c68327eb564209e886604af4a0740b32c596: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-blplb/dashboard-metrics-scraper" id=9991e3e7-ef9d-40e2-8314-c530cd534188 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	fdb98f2e54e54       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           4 seconds ago       Exited              dashboard-metrics-scraper   3                   791aed063c483       dashboard-metrics-scraper-867fb5f87b-blplb   kubernetes-dashboard
	f600ae35f3888       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           17 seconds ago      Running             storage-provisioner         1                   dfc987a29c99e       storage-provisioner                          kube-system
	bf8be33f60e04       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   38 seconds ago      Running             kubernetes-dashboard        0                   9a2384fcee142       kubernetes-dashboard-b84665fb8-crv8j         kubernetes-dashboard
	0009fd589117f       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                           48 seconds ago      Running             coredns                     0                   629ac76cdbf2a       coredns-7d764666f9-2ffx6                     kube-system
	80c7f49420120       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           48 seconds ago      Running             busybox                     1                   7c2e5c6182757       busybox                                      default
	120338a03655e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           48 seconds ago      Exited              storage-provisioner         0                   dfc987a29c99e       storage-provisioner                          kube-system
	620c41d729411       32652ff1bbe6b6df16a3dc621fcad4e6e185a2852bffc5847aab79e36c54bfb8                                           48 seconds ago      Running             kube-proxy                  0                   8317dd44c7571       kube-proxy-2slrz                             kube-system
	df0529eb33f7e       4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251                                           48 seconds ago      Running             kindnet-cni                 0                   22a9ede88038a       kindnet-jqj7z                                kube-system
	549b775272237       550794e3b12ac21ec7fd940bdfb45f7c8dae3c52acd8c70e580b882de20c3dcc                                           50 seconds ago      Running             kube-scheduler              0                   a6a35dc643226       kube-scheduler-embed-certs-285966            kube-system
	922cd28382c3c       0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2                                           50 seconds ago      Running             etcd                        0                   83fb4529af8a3       etcd-embed-certs-285966                      kube-system
	1c93a72833005       2c9a4b058bd7e6ec479c38f9e8a1dac2f5ee5b0a3ebda6dfac968e8720229508                                           50 seconds ago      Running             kube-controller-manager     0                   5fb7fecf3031a       kube-controller-manager-embed-certs-285966   kube-system
	99ff4271fc175       5c6acd67e9cd10eb60a246cd233db251ef62ea97e6572f897e873f0cb648f499                                           50 seconds ago      Running             kube-apiserver              0                   ae5f472ad51db       kube-apiserver-embed-certs-285966            kube-system
	
	
	==> coredns [0009fd589117fa2571af77390024cd0e97da87ad32579cb2345f04fa42a054c0] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:43679 - 6897 "HINFO IN 957526106081891227.1085550982097879218. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.01964181s
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	
	
	==> describe nodes <==
	Name:               embed-certs-285966
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-285966
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4473aa4ffaa416872fe849e19c0ce3dabca02c04
	                    minikube.k8s.io/name=embed-certs-285966
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2026_01_11T07_56_31_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 11 Jan 2026 07:56:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-285966
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 11 Jan 2026 07:58:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 11 Jan 2026 07:57:59 +0000   Sun, 11 Jan 2026 07:56:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 11 Jan 2026 07:57:59 +0000   Sun, 11 Jan 2026 07:56:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 11 Jan 2026 07:57:59 +0000   Sun, 11 Jan 2026 07:56:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 11 Jan 2026 07:57:59 +0000   Sun, 11 Jan 2026 07:56:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    embed-certs-285966
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 19a608e996fda0d6a8d0aefd69620b69
	  System UUID:                0f057153-04bf-4365-8e21-27be17dcfc7b
	  Boot ID:                    bf5ef4fd-1e5e-4242-b522-48b308ed11f1
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.35.0
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         85s
	  kube-system                 coredns-7d764666f9-2ffx6                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     101s
	  kube-system                 etcd-embed-certs-285966                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         107s
	  kube-system                 kindnet-jqj7z                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      101s
	  kube-system                 kube-apiserver-embed-certs-285966             250m (3%)     0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system                 kube-controller-manager-embed-certs-285966    200m (2%)     0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system                 kube-proxy-2slrz                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         101s
	  kube-system                 kube-scheduler-embed-certs-285966             100m (1%)     0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         101s
	  kubernetes-dashboard        dashboard-metrics-scraper-867fb5f87b-blplb    0 (0%)        0 (0%)      0 (0%)           0 (0%)         45s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-crv8j          0 (0%)        0 (0%)      0 (0%)           0 (0%)         45s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  103s  node-controller  Node embed-certs-285966 event: Registered Node embed-certs-285966 in Controller
	  Normal  RegisteredNode  45s   node-controller  Node embed-certs-285966 event: Registered Node embed-certs-285966 in Controller
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff d2 27 60 f0 e8 33 08 06
	[  +7.744805] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ae ab 39 92 a9 51 08 06
	[  +3.804668] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff f6 cc 68 e4 77 08 08 06
	[  +0.000399] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff d2 27 60 f0 e8 33 08 06
	[ +13.685163] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff ce 63 58 3f ae 06 08 06
	[  +0.000359] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ae ab 39 92 a9 51 08 06
	[ +15.294069] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 62 fd 79 5e 2e 8a 08 06
	[  +0.000335] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 92 a7 42 30 c5 33 08 06
	[  +0.001094] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 76 49 4e 78 31 d8 08 06
	[Jan11 07:56] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 1e 72 0a 69 e2 8c 08 06
	[  +0.129588] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 46 31 8d 74 d3 50 08 06
	[ +23.675508] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 72 b4 a7 bf 59 ba 08 06
	[  +0.000409] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 46 31 8d 74 d3 50 08 06
	
	
	==> etcd [922cd28382c3c7abe22799ee282d7e8cdbde7bdf06d9c7ad7752450641f086fe] <==
	{"level":"info","ts":"2026-01-11T07:57:27.489366Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2026-01-11T07:57:27.489378Z","caller":"membership/cluster.go:433","msg":"ignore already added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2026-01-11T07:57:27.489366Z","caller":"embed/etcd.go:292","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2026-01-11T07:57:27.489412Z","caller":"embed/etcd.go:890","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2026-01-11T07:57:27.489495Z","caller":"embed/etcd.go:640","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2026-01-11T07:57:27.489501Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","from":"3.6","to":"3.6"}
	{"level":"info","ts":"2026-01-11T07:57:27.489513Z","caller":"embed/etcd.go:611","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2026-01-11T07:57:27.579637Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"9f0758e1c58a86ed is starting a new election at term 2"}
	{"level":"info","ts":"2026-01-11T07:57:27.579732Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"9f0758e1c58a86ed became pre-candidate at term 2"}
	{"level":"info","ts":"2026-01-11T07:57:27.579840Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2026-01-11T07:57:27.579869Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"9f0758e1c58a86ed has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2026-01-11T07:57:27.579889Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"9f0758e1c58a86ed became candidate at term 3"}
	{"level":"info","ts":"2026-01-11T07:57:27.580693Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2026-01-11T07:57:27.580728Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"9f0758e1c58a86ed has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2026-01-11T07:57:27.580751Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"9f0758e1c58a86ed became leader at term 3"}
	{"level":"info","ts":"2026-01-11T07:57:27.580759Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2026-01-11T07:57:27.581580Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:embed-certs-285966 ClientURLs:[https://192.168.85.2:2379]}","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2026-01-11T07:57:27.581601Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-11T07:57:27.581588Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-11T07:57:27.581849Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2026-01-11T07:57:27.581877Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2026-01-11T07:57:27.585144Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-11T07:57:27.585476Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-11T07:57:27.589003Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2026-01-11T07:57:27.589483Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 07:58:17 up 40 min,  0 user,  load average: 3.37, 3.44, 2.50
	Linux embed-certs-285966 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [df0529eb33f7e4986b1917bf3aad1944366f3a1e84f1d7f15c408faa6e03f4f0] <==
	I0111 07:57:29.576474       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0111 07:57:29.576724       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I0111 07:57:29.576933       1 main.go:148] setting mtu 1500 for CNI 
	I0111 07:57:29.576959       1 main.go:178] kindnetd IP family: "ipv4"
	I0111 07:57:29.576984       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2026-01-11T07:57:29Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0111 07:57:29.874457       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0111 07:57:29.874656       1 controller.go:381] "Waiting for informer caches to sync"
	I0111 07:57:29.874790       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0111 07:57:29.971861       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I0111 07:57:30.270719       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0111 07:57:30.270765       1 metrics.go:72] Registering metrics
	I0111 07:57:30.270875       1 controller.go:711] "Syncing nftables rules"
	I0111 07:57:39.874057       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0111 07:57:39.874093       1 main.go:301] handling current node
	I0111 07:57:49.875325       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0111 07:57:49.875374       1 main.go:301] handling current node
	I0111 07:57:59.874872       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0111 07:57:59.874904       1 main.go:301] handling current node
	I0111 07:58:09.880821       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0111 07:58:09.880868       1 main.go:301] handling current node
	
	
	==> kube-apiserver [99ff4271fc175e5592daa8e5b70c899fc351f028abff121118ae6f6deb18a8ef] <==
	I0111 07:57:28.908394       1 cache.go:39] Caches are synced for autoregister controller
	I0111 07:57:28.908418       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0111 07:57:28.908448       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0111 07:57:28.908685       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I0111 07:57:28.908927       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I0111 07:57:28.908961       1 shared_informer.go:377] "Caches are synced"
	I0111 07:57:28.915791       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0111 07:57:28.916019       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I0111 07:57:28.918002       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0111 07:57:28.918656       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	E0111 07:57:28.921769       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0111 07:57:28.927837       1 shared_informer.go:377] "Caches are synced"
	I0111 07:57:28.927899       1 policy_source.go:248] refreshing policies
	I0111 07:57:28.955073       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I0111 07:57:29.041184       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I0111 07:57:29.345854       1 controller.go:667] quota admission added evaluator for: namespaces
	I0111 07:57:29.432533       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I0111 07:57:29.467116       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0111 07:57:29.481544       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0111 07:57:29.534078       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.97.83.74"}
	I0111 07:57:29.548442       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.109.31.79"}
	I0111 07:57:29.813885       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I0111 07:57:32.637033       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0111 07:57:32.687050       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0111 07:57:32.792902       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [1c93a7283300525c6fa1c9642d2a153684e61da58f29768b0be4390bc81386eb] <==
	I0111 07:57:32.094087       1 shared_informer.go:377] "Caches are synced"
	I0111 07:57:32.094152       1 shared_informer.go:377] "Caches are synced"
	I0111 07:57:32.094251       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I0111 07:57:32.094360       1 shared_informer.go:377] "Caches are synced"
	I0111 07:57:32.094384       1 shared_informer.go:377] "Caches are synced"
	I0111 07:57:32.094717       1 shared_informer.go:377] "Caches are synced"
	I0111 07:57:32.094888       1 shared_informer.go:377] "Caches are synced"
	I0111 07:57:32.095301       1 shared_informer.go:377] "Caches are synced"
	I0111 07:57:32.095760       1 shared_informer.go:377] "Caches are synced"
	I0111 07:57:32.095873       1 shared_informer.go:377] "Caches are synced"
	I0111 07:57:32.095887       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="embed-certs-285966"
	I0111 07:57:32.095897       1 shared_informer.go:377] "Caches are synced"
	I0111 07:57:32.095929       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I0111 07:57:32.098412       1 shared_informer.go:377] "Caches are synced"
	I0111 07:57:32.098642       1 shared_informer.go:377] "Caches are synced"
	I0111 07:57:32.099318       1 shared_informer.go:377] "Caches are synced"
	I0111 07:57:32.099857       1 shared_informer.go:377] "Caches are synced"
	I0111 07:57:32.100741       1 shared_informer.go:377] "Caches are synced"
	I0111 07:57:32.100869       1 shared_informer.go:377] "Caches are synced"
	I0111 07:57:32.115388       1 shared_informer.go:370] "Waiting for caches to sync"
	I0111 07:57:32.192157       1 shared_informer.go:377] "Caches are synced"
	I0111 07:57:32.192186       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I0111 07:57:32.192192       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I0111 07:57:32.218232       1 shared_informer.go:377] "Caches are synced"
	I0111 07:57:32.694566       1 endpointslice_controller.go:361] "Error syncing endpoint slices for service, retrying" key="kube-system/kube-dns" err="EndpointSlice informer cache is out of date"
	
	
	==> kube-proxy [620c41d7294111dc05b872982ad046e2a32a88a21c31868000a510d357363b23] <==
	I0111 07:57:29.415281       1 server_linux.go:53] "Using iptables proxy"
	I0111 07:57:29.510381       1 shared_informer.go:370] "Waiting for caches to sync"
	I0111 07:57:29.611704       1 shared_informer.go:377] "Caches are synced"
	I0111 07:57:29.611744       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E0111 07:57:29.611868       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0111 07:57:29.640473       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0111 07:57:29.640699       1 server_linux.go:136] "Using iptables Proxier"
	I0111 07:57:29.649051       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0111 07:57:29.649762       1 server.go:529] "Version info" version="v1.35.0"
	I0111 07:57:29.649784       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0111 07:57:29.651636       1 config.go:106] "Starting endpoint slice config controller"
	I0111 07:57:29.651657       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0111 07:57:29.651685       1 config.go:403] "Starting serviceCIDR config controller"
	I0111 07:57:29.651690       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0111 07:57:29.651708       1 config.go:309] "Starting node config controller"
	I0111 07:57:29.651714       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0111 07:57:29.652617       1 config.go:200] "Starting service config controller"
	I0111 07:57:29.652660       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0111 07:57:29.751989       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0111 07:57:29.752030       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0111 07:57:29.752119       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0111 07:57:29.753232       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [549b7752722376433db3102bf0433676cbe020d474e238af58fe8f3ce20e7cad] <==
	I0111 07:57:27.728549       1 serving.go:386] Generated self-signed cert in-memory
	W0111 07:57:28.867028       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0111 07:57:28.867064       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0111 07:57:28.867076       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0111 07:57:28.867086       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0111 07:57:28.898316       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0"
	I0111 07:57:28.898413       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0111 07:57:28.901433       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0111 07:57:28.901551       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0111 07:57:28.904047       1 shared_informer.go:370] "Waiting for caches to sync"
	I0111 07:57:28.901573       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0111 07:57:29.004781       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Jan 11 07:57:47 embed-certs-285966 kubelet[734]: E0111 07:57:47.025687     734 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-embed-certs-285966" containerName="etcd"
	Jan 11 07:57:47 embed-certs-285966 kubelet[734]: E0111 07:57:47.919847     734 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-blplb" containerName="dashboard-metrics-scraper"
	Jan 11 07:57:47 embed-certs-285966 kubelet[734]: I0111 07:57:47.919885     734 scope.go:122] "RemoveContainer" containerID="8ce538b6ca6c0b4df31b736aa2644f6edd98aff1278f6dcf5e981eafe02c6d2b"
	Jan 11 07:57:48 embed-certs-285966 kubelet[734]: I0111 07:57:48.030419     734 scope.go:122] "RemoveContainer" containerID="8ce538b6ca6c0b4df31b736aa2644f6edd98aff1278f6dcf5e981eafe02c6d2b"
	Jan 11 07:57:48 embed-certs-285966 kubelet[734]: E0111 07:57:48.030668     734 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-blplb" containerName="dashboard-metrics-scraper"
	Jan 11 07:57:48 embed-certs-285966 kubelet[734]: I0111 07:57:48.030703     734 scope.go:122] "RemoveContainer" containerID="e68fec00e5d776a19524cb8271c8c68327eb564209e886604af4a0740b32c596"
	Jan 11 07:57:48 embed-certs-285966 kubelet[734]: E0111 07:57:48.030957     734 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-blplb_kubernetes-dashboard(144f86f0-8eed-45f1-b401-7261a9de3696)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-blplb" podUID="144f86f0-8eed-45f1-b401-7261a9de3696"
	Jan 11 07:57:53 embed-certs-285966 kubelet[734]: E0111 07:57:53.742340     734 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-blplb" containerName="dashboard-metrics-scraper"
	Jan 11 07:57:53 embed-certs-285966 kubelet[734]: I0111 07:57:53.742382     734 scope.go:122] "RemoveContainer" containerID="e68fec00e5d776a19524cb8271c8c68327eb564209e886604af4a0740b32c596"
	Jan 11 07:57:53 embed-certs-285966 kubelet[734]: E0111 07:57:53.742547     734 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-blplb_kubernetes-dashboard(144f86f0-8eed-45f1-b401-7261a9de3696)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-blplb" podUID="144f86f0-8eed-45f1-b401-7261a9de3696"
	Jan 11 07:58:00 embed-certs-285966 kubelet[734]: I0111 07:58:00.062749     734 scope.go:122] "RemoveContainer" containerID="120338a03655e1de2f7130f7e1a1392b48ae76e5908b1a38031ab21d34bce59e"
	Jan 11 07:58:00 embed-certs-285966 kubelet[734]: E0111 07:58:00.969700     734 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-2ffx6" containerName="coredns"
	Jan 11 07:58:12 embed-certs-285966 kubelet[734]: E0111 07:58:12.920355     734 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-blplb" containerName="dashboard-metrics-scraper"
	Jan 11 07:58:12 embed-certs-285966 kubelet[734]: I0111 07:58:12.920411     734 scope.go:122] "RemoveContainer" containerID="e68fec00e5d776a19524cb8271c8c68327eb564209e886604af4a0740b32c596"
	Jan 11 07:58:13 embed-certs-285966 kubelet[734]: I0111 07:58:13.102559     734 scope.go:122] "RemoveContainer" containerID="e68fec00e5d776a19524cb8271c8c68327eb564209e886604af4a0740b32c596"
	Jan 11 07:58:13 embed-certs-285966 kubelet[734]: E0111 07:58:13.102886     734 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-blplb" containerName="dashboard-metrics-scraper"
	Jan 11 07:58:13 embed-certs-285966 kubelet[734]: I0111 07:58:13.102930     734 scope.go:122] "RemoveContainer" containerID="fdb98f2e54e541a76a833ea05306d60550094417823aa81ced40c4445786521e"
	Jan 11 07:58:13 embed-certs-285966 kubelet[734]: E0111 07:58:13.103156     734 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-blplb_kubernetes-dashboard(144f86f0-8eed-45f1-b401-7261a9de3696)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-blplb" podUID="144f86f0-8eed-45f1-b401-7261a9de3696"
	Jan 11 07:58:14 embed-certs-285966 kubelet[734]: E0111 07:58:14.107507     734 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-blplb" containerName="dashboard-metrics-scraper"
	Jan 11 07:58:14 embed-certs-285966 kubelet[734]: I0111 07:58:14.107541     734 scope.go:122] "RemoveContainer" containerID="fdb98f2e54e541a76a833ea05306d60550094417823aa81ced40c4445786521e"
	Jan 11 07:58:14 embed-certs-285966 kubelet[734]: E0111 07:58:14.107743     734 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-blplb_kubernetes-dashboard(144f86f0-8eed-45f1-b401-7261a9de3696)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-blplb" podUID="144f86f0-8eed-45f1-b401-7261a9de3696"
	Jan 11 07:58:14 embed-certs-285966 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Jan 11 07:58:14 embed-certs-285966 systemd[1]: kubelet.service: Deactivated successfully.
	Jan 11 07:58:14 embed-certs-285966 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Jan 11 07:58:14 embed-certs-285966 systemd[1]: kubelet.service: Consumed 1.694s CPU time.
	
	
	==> kubernetes-dashboard [bf8be33f60e0451ad75aa007d76c844836b6d2655aeef7c05e71bdd2dddb58bb] <==
	2026/01/11 07:57:39 Using namespace: kubernetes-dashboard
	2026/01/11 07:57:39 Using in-cluster config to connect to apiserver
	2026/01/11 07:57:39 Using secret token for csrf signing
	2026/01/11 07:57:39 Initializing csrf token from kubernetes-dashboard-csrf secret
	2026/01/11 07:57:39 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2026/01/11 07:57:39 Successful initial request to the apiserver, version: v1.35.0
	2026/01/11 07:57:39 Generating JWE encryption key
	2026/01/11 07:57:39 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2026/01/11 07:57:39 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2026/01/11 07:57:39 Initializing JWE encryption key from synchronized object
	2026/01/11 07:57:39 Creating in-cluster Sidecar client
	2026/01/11 07:57:39 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2026/01/11 07:57:39 Serving insecurely on HTTP port: 9090
	2026/01/11 07:58:09 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2026/01/11 07:57:39 Starting overwatch
	
	
	==> storage-provisioner [120338a03655e1de2f7130f7e1a1392b48ae76e5908b1a38031ab21d34bce59e] <==
	I0111 07:57:29.386487       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0111 07:57:59.391541       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [f600ae35f38886b304a554efeb2062cc07675a31d6b67c1422079f15c4424867] <==
	I0111 07:58:00.114523       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0111 07:58:00.121764       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0111 07:58:00.121815       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W0111 07:58:00.123784       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 07:58:03.579037       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 07:58:07.839393       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 07:58:11.438340       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 07:58:14.494257       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 07:58:17.517916       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 07:58:17.526323       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0111 07:58:17.526571       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0111 07:58:17.526857       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ba85867d-09e8-4b62-9236-97b9a3a7c673", APIVersion:"v1", ResourceVersion:"686", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-285966_7c4e025c-369a-4612-8e64-4f3377916b41 became leader
	I0111 07:58:17.527074       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-285966_7c4e025c-369a-4612-8e64-4f3377916b41!
	W0111 07:58:17.537405       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 07:58:17.542134       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0111 07:58:17.628489       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-285966_7c4e025c-369a-4612-8e64-4f3377916b41!
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-285966 -n embed-certs-285966
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-285966 -n embed-certs-285966: exit status 2 (440.982733ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context embed-certs-285966 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect embed-certs-285966
helpers_test.go:244: (dbg) docker inspect embed-certs-285966:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "97da5d38996927edb1a657a3a9511f58b9f254917ccec49ae4cc338d6e0853c0",
	        "Created": "2026-01-11T07:56:18.369460444Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 314795,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2026-01-11T07:57:20.104841224Z",
	            "FinishedAt": "2026-01-11T07:57:19.081756728Z"
	        },
	        "Image": "sha256:9d81ad6f098dcf669510a21e6f98da4d3301079c139c22f3d7bcee050830f45e",
	        "ResolvConfPath": "/var/lib/docker/containers/97da5d38996927edb1a657a3a9511f58b9f254917ccec49ae4cc338d6e0853c0/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/97da5d38996927edb1a657a3a9511f58b9f254917ccec49ae4cc338d6e0853c0/hostname",
	        "HostsPath": "/var/lib/docker/containers/97da5d38996927edb1a657a3a9511f58b9f254917ccec49ae4cc338d6e0853c0/hosts",
	        "LogPath": "/var/lib/docker/containers/97da5d38996927edb1a657a3a9511f58b9f254917ccec49ae4cc338d6e0853c0/97da5d38996927edb1a657a3a9511f58b9f254917ccec49ae4cc338d6e0853c0-json.log",
	        "Name": "/embed-certs-285966",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-285966:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-285966",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "97da5d38996927edb1a657a3a9511f58b9f254917ccec49ae4cc338d6e0853c0",
	                "LowerDir": "/var/lib/docker/overlay2/245cbe138f6501f3caed5f3c5a13b7be8d168bd7e263f71051881c6d647d7c9e-init/diff:/var/lib/docker/overlay2/fbed3e2386e680328722e9ed68250a4aae1dd3b50a64848ee2ebb685fb1cc002/diff",
	                "MergedDir": "/var/lib/docker/overlay2/245cbe138f6501f3caed5f3c5a13b7be8d168bd7e263f71051881c6d647d7c9e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/245cbe138f6501f3caed5f3c5a13b7be8d168bd7e263f71051881c6d647d7c9e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/245cbe138f6501f3caed5f3c5a13b7be8d168bd7e263f71051881c6d647d7c9e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-285966",
	                "Source": "/var/lib/docker/volumes/embed-certs-285966/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-285966",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-285966",
	                "name.minikube.sigs.k8s.io": "embed-certs-285966",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "646418215a969ec59283fc108ba59968d25a3483e4dacdb2b6629e71b86e44ec",
	            "SandboxKey": "/var/run/docker/netns/646418215a96",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33118"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33119"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33122"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33120"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33121"
	                    }
	                ]
	            },
	            "Networks": {
	                "embed-certs-285966": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "381973c00b1f29aaeef3a0b61a9772036cd7ea9cc64cf06cd8af7e4ef146b220",
	                    "EndpointID": "8e7a4a404edc32bdafe18cda07d0c7baf7b0973723859e33c5a5a597a9a7ebab",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "2e:79:68:16:08:df",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-285966",
	                        "97da5d389969"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-285966 -n embed-certs-285966
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-285966 -n embed-certs-285966: exit status 2 (438.29505ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-285966 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-285966 logs -n 25: (3.055473164s)
helpers_test.go:261: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ delete  │ -p disable-driver-mounts-768607                                                                                                                                                                                                               │ disable-driver-mounts-768607 │ jenkins │ v1.37.0 │ 11 Jan 26 07:56 UTC │ 11 Jan 26 07:56 UTC │
	│ start   │ -p default-k8s-diff-port-585688 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-585688 │ jenkins │ v1.37.0 │ 11 Jan 26 07:56 UTC │ 11 Jan 26 07:57 UTC │
	│ addons  │ enable metrics-server -p no-preload-875594 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-875594            │ jenkins │ v1.37.0 │ 11 Jan 26 07:56 UTC │                     │
	│ stop    │ -p no-preload-875594 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-875594            │ jenkins │ v1.37.0 │ 11 Jan 26 07:56 UTC │ 11 Jan 26 07:57 UTC │
	│ addons  │ enable metrics-server -p embed-certs-285966 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-285966           │ jenkins │ v1.37.0 │ 11 Jan 26 07:57 UTC │                     │
	│ stop    │ -p embed-certs-285966 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-285966           │ jenkins │ v1.37.0 │ 11 Jan 26 07:57 UTC │ 11 Jan 26 07:57 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-086314 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-086314       │ jenkins │ v1.37.0 │ 11 Jan 26 07:57 UTC │ 11 Jan 26 07:57 UTC │
	│ start   │ -p old-k8s-version-086314 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-086314       │ jenkins │ v1.37.0 │ 11 Jan 26 07:57 UTC │ 11 Jan 26 07:57 UTC │
	│ addons  │ enable dashboard -p no-preload-875594 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-875594            │ jenkins │ v1.37.0 │ 11 Jan 26 07:57 UTC │ 11 Jan 26 07:57 UTC │
	│ start   │ -p no-preload-875594 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                       │ no-preload-875594            │ jenkins │ v1.37.0 │ 11 Jan 26 07:57 UTC │ 11 Jan 26 07:58 UTC │
	│ addons  │ enable dashboard -p embed-certs-285966 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-285966           │ jenkins │ v1.37.0 │ 11 Jan 26 07:57 UTC │ 11 Jan 26 07:57 UTC │
	│ start   │ -p embed-certs-285966 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                        │ embed-certs-285966           │ jenkins │ v1.37.0 │ 11 Jan 26 07:57 UTC │ 11 Jan 26 07:58 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-585688 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-585688 │ jenkins │ v1.37.0 │ 11 Jan 26 07:57 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-585688 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-585688 │ jenkins │ v1.37.0 │ 11 Jan 26 07:57 UTC │ 11 Jan 26 07:58 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-585688 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-585688 │ jenkins │ v1.37.0 │ 11 Jan 26 07:58 UTC │ 11 Jan 26 07:58 UTC │
	│ start   │ -p default-k8s-diff-port-585688 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-585688 │ jenkins │ v1.37.0 │ 11 Jan 26 07:58 UTC │                     │
	│ image   │ old-k8s-version-086314 image list --format=json                                                                                                                                                                                               │ old-k8s-version-086314       │ jenkins │ v1.37.0 │ 11 Jan 26 07:58 UTC │ 11 Jan 26 07:58 UTC │
	│ pause   │ -p old-k8s-version-086314 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-086314       │ jenkins │ v1.37.0 │ 11 Jan 26 07:58 UTC │                     │
	│ delete  │ -p old-k8s-version-086314                                                                                                                                                                                                                     │ old-k8s-version-086314       │ jenkins │ v1.37.0 │ 11 Jan 26 07:58 UTC │ 11 Jan 26 07:58 UTC │
	│ image   │ no-preload-875594 image list --format=json                                                                                                                                                                                                    │ no-preload-875594            │ jenkins │ v1.37.0 │ 11 Jan 26 07:58 UTC │ 11 Jan 26 07:58 UTC │
	│ pause   │ -p no-preload-875594 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-875594            │ jenkins │ v1.37.0 │ 11 Jan 26 07:58 UTC │                     │
	│ image   │ embed-certs-285966 image list --format=json                                                                                                                                                                                                   │ embed-certs-285966           │ jenkins │ v1.37.0 │ 11 Jan 26 07:58 UTC │ 11 Jan 26 07:58 UTC │
	│ pause   │ -p embed-certs-285966 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-285966           │ jenkins │ v1.37.0 │ 11 Jan 26 07:58 UTC │                     │
	│ delete  │ -p old-k8s-version-086314                                                                                                                                                                                                                     │ old-k8s-version-086314       │ jenkins │ v1.37.0 │ 11 Jan 26 07:58 UTC │ 11 Jan 26 07:58 UTC │
	│ start   │ -p newest-cni-613901 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ newest-cni-613901            │ jenkins │ v1.37.0 │ 11 Jan 26 07:58 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2026/01/11 07:58:16
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0111 07:58:16.814764  326324 out.go:360] Setting OutFile to fd 1 ...
	I0111 07:58:16.815091  326324 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 07:58:16.815104  326324 out.go:374] Setting ErrFile to fd 2...
	I0111 07:58:16.815110  326324 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 07:58:16.815339  326324 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22402-4689/.minikube/bin
	I0111 07:58:16.815856  326324 out.go:368] Setting JSON to false
	I0111 07:58:16.817178  326324 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2445,"bootTime":1768115852,"procs":381,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0111 07:58:16.817250  326324 start.go:143] virtualization: kvm guest
	I0111 07:58:16.818975  326324 out.go:179] * [newest-cni-613901] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0111 07:58:16.820186  326324 out.go:179]   - MINIKUBE_LOCATION=22402
	I0111 07:58:16.820204  326324 notify.go:221] Checking for updates...
	I0111 07:58:16.824310  326324 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0111 07:58:16.826352  326324 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22402-4689/kubeconfig
	I0111 07:58:16.827706  326324 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22402-4689/.minikube
	I0111 07:58:16.829851  326324 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0111 07:58:16.832258  326324 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0111 07:58:16.833851  326324 config.go:182] Loaded profile config "default-k8s-diff-port-585688": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 07:58:16.834026  326324 config.go:182] Loaded profile config "embed-certs-285966": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 07:58:16.834175  326324 config.go:182] Loaded profile config "no-preload-875594": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 07:58:16.834283  326324 driver.go:422] Setting default libvirt URI to qemu:///system
	I0111 07:58:16.865938  326324 docker.go:124] docker version: linux-29.1.4:Docker Engine - Community
	I0111 07:58:16.866095  326324 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0111 07:58:16.965718  326324 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2026-01-11 07:58:16.953541265 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0111 07:58:16.965962  326324 docker.go:319] overlay module found
	I0111 07:58:16.968424  326324 out.go:179] * Using the docker driver based on user configuration
	I0111 07:58:16.969560  326324 start.go:309] selected driver: docker
	I0111 07:58:16.969575  326324 start.go:928] validating driver "docker" against <nil>
	I0111 07:58:16.969602  326324 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0111 07:58:16.970417  326324 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0111 07:58:17.045988  326324 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2026-01-11 07:58:17.031518631 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0111 07:58:17.046498  326324 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	W0111 07:58:17.046661  326324 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0111 07:58:17.047126  326324 start_flags.go:1038] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0111 07:58:17.048712  326324 out.go:179] * Using Docker driver with root privileges
	I0111 07:58:17.049780  326324 cni.go:84] Creating CNI manager for ""
	I0111 07:58:17.049871  326324 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0111 07:58:17.049886  326324 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I0111 07:58:17.049960  326324 start.go:353] cluster config:
	{Name:newest-cni-613901 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-613901 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnet
ClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0111 07:58:17.051237  326324 out.go:179] * Starting "newest-cni-613901" primary control-plane node in "newest-cni-613901" cluster
	I0111 07:58:17.052435  326324 cache.go:134] Beginning downloading kic base image for docker with crio
	I0111 07:58:17.053867  326324 out.go:179] * Pulling base image v0.0.48-1768032998-22402 ...
	I0111 07:58:17.055044  326324 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0111 07:58:17.055082  326324 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 in local docker daemon
	I0111 07:58:17.055090  326324 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22402-4689/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4
	I0111 07:58:17.055101  326324 cache.go:65] Caching tarball of preloaded images
	I0111 07:58:17.055186  326324 preload.go:251] Found /home/jenkins/minikube-integration/22402-4689/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0111 07:58:17.055203  326324 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I0111 07:58:17.055332  326324 profile.go:143] Saving config to /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/newest-cni-613901/config.json ...
	I0111 07:58:17.055362  326324 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/newest-cni-613901/config.json: {Name:mk70cac531cb2184da90332e0b5263855a47642d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 07:58:17.083176  326324 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 in local docker daemon, skipping pull
	I0111 07:58:17.083200  326324 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 exists in daemon, skipping load
	I0111 07:58:17.083213  326324 cache.go:243] Successfully downloaded all kic artifacts
	I0111 07:58:17.083245  326324 start.go:360] acquireMachinesLock for newest-cni-613901: {Name:mk432dc96cb4574bc050bdfd86d1212df9b7472d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0111 07:58:17.083338  326324 start.go:364] duration metric: took 76.086µs to acquireMachinesLock for "newest-cni-613901"
	I0111 07:58:17.083364  326324 start.go:93] Provisioning new machine with config: &{Name:newest-cni-613901 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-613901 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fals
e DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0111 07:58:17.083461  326324 start.go:125] createHost starting for "" (driver="docker")
	I0111 07:58:14.188953  321160 api_server.go:299] Checking apiserver healthz at https://192.168.94.2:8444/healthz ...
	I0111 07:58:14.194914  321160 api_server.go:325] https://192.168.94.2:8444/healthz returned 200:
	ok
	I0111 07:58:14.196293  321160 api_server.go:141] control plane version: v1.35.0
	I0111 07:58:14.196333  321160 api_server.go:131] duration metric: took 1.007459811s to wait for apiserver health ...
	I0111 07:58:14.196346  321160 system_pods.go:43] waiting for kube-system pods to appear ...
	I0111 07:58:14.201655  321160 system_pods.go:59] 8 kube-system pods found
	I0111 07:58:14.201690  321160 system_pods.go:61] "coredns-7d764666f9-9n4hp" [9e11784a-1ace-44c0-8f9e-656ed390ed43] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0111 07:58:14.201701  321160 system_pods.go:61] "etcd-default-k8s-diff-port-585688" [8a1723a7-4506-4cbe-b1b5-c6f9df32979c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0111 07:58:14.201710  321160 system_pods.go:61] "kindnet-lz5hh" [035a0293-8ca4-4f7c-8cc7-33f5f35eee60] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0111 07:58:14.201719  321160 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-585688" [2dd2d325-0f00-43c4-b8c3-40a121ad8236] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0111 07:58:14.201729  321160 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-585688" [f8d9761a-cb04-4f60-a85c-4e5d711937e5] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0111 07:58:14.201737  321160 system_pods.go:61] "kube-proxy-5z6rc" [4f0dd5f1-a24c-4f95-8434-9ff6e40b85e6] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0111 07:58:14.201744  321160 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-585688" [c22fa171-44f1-4321-b77a-43088630dfc2] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0111 07:58:14.201751  321160 system_pods.go:61] "storage-provisioner" [7f00c923-02d5-4873-968e-207c6eaa9567] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0111 07:58:14.201758  321160 system_pods.go:74] duration metric: took 5.404407ms to wait for pod list to return data ...
	I0111 07:58:14.201769  321160 default_sa.go:34] waiting for default service account to be created ...
	I0111 07:58:14.204181  321160 default_sa.go:45] found service account: "default"
	I0111 07:58:14.204202  321160 default_sa.go:55] duration metric: took 2.426319ms for default service account to be created ...
	I0111 07:58:14.204212  321160 system_pods.go:116] waiting for k8s-apps to be running ...
	I0111 07:58:14.207287  321160 system_pods.go:86] 8 kube-system pods found
	I0111 07:58:14.207316  321160 system_pods.go:89] "coredns-7d764666f9-9n4hp" [9e11784a-1ace-44c0-8f9e-656ed390ed43] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0111 07:58:14.207327  321160 system_pods.go:89] "etcd-default-k8s-diff-port-585688" [8a1723a7-4506-4cbe-b1b5-c6f9df32979c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0111 07:58:14.207337  321160 system_pods.go:89] "kindnet-lz5hh" [035a0293-8ca4-4f7c-8cc7-33f5f35eee60] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0111 07:58:14.207349  321160 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-585688" [2dd2d325-0f00-43c4-b8c3-40a121ad8236] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0111 07:58:14.207358  321160 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-585688" [f8d9761a-cb04-4f60-a85c-4e5d711937e5] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0111 07:58:14.207369  321160 system_pods.go:89] "kube-proxy-5z6rc" [4f0dd5f1-a24c-4f95-8434-9ff6e40b85e6] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0111 07:58:14.207377  321160 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-585688" [c22fa171-44f1-4321-b77a-43088630dfc2] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0111 07:58:14.207384  321160 system_pods.go:89] "storage-provisioner" [7f00c923-02d5-4873-968e-207c6eaa9567] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0111 07:58:14.207393  321160 system_pods.go:126] duration metric: took 3.175161ms to wait for k8s-apps to be running ...
	I0111 07:58:14.207401  321160 system_svc.go:44] waiting for kubelet service to be running ....
	I0111 07:58:14.207447  321160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0111 07:58:14.221677  321160 system_svc.go:56] duration metric: took 14.269621ms WaitForService to wait for kubelet
	I0111 07:58:14.221702  321160 kubeadm.go:587] duration metric: took 3.210465004s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0111 07:58:14.221723  321160 node_conditions.go:102] verifying NodePressure condition ...
	I0111 07:58:14.224642  321160 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0111 07:58:14.224671  321160 node_conditions.go:123] node cpu capacity is 8
	I0111 07:58:14.224689  321160 node_conditions.go:105] duration metric: took 2.958907ms to run NodePressure ...
	I0111 07:58:14.224704  321160 start.go:242] waiting for startup goroutines ...
	I0111 07:58:14.224717  321160 start.go:247] waiting for cluster config update ...
	I0111 07:58:14.224735  321160 start.go:256] writing updated cluster config ...
	I0111 07:58:14.225070  321160 ssh_runner.go:195] Run: rm -f paused
	I0111 07:58:14.229613  321160 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0111 07:58:14.233461  321160 pod_ready.go:83] waiting for pod "coredns-7d764666f9-9n4hp" in "kube-system" namespace to be "Ready" or be gone ...
	W0111 07:58:16.239197  321160 pod_ready.go:104] pod "coredns-7d764666f9-9n4hp" is not "Ready", error: <nil>
	W0111 07:58:18.248959  321160 pod_ready.go:104] pod "coredns-7d764666f9-9n4hp" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Jan 11 07:57:47 embed-certs-285966 crio[569]: time="2026-01-11T07:57:47.961709754Z" level=info msg="Started container" PID=1789 containerID=e68fec00e5d776a19524cb8271c8c68327eb564209e886604af4a0740b32c596 description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-blplb/dashboard-metrics-scraper id=90fc1e7a-e172-4819-9e6d-667826527ee7 name=/runtime.v1.RuntimeService/StartContainer sandboxID=791aed063c4835893ce28d3a69c70b53f43e6e122c6db27f3a92dd21ce2618a3
	Jan 11 07:57:48 embed-certs-285966 crio[569]: time="2026-01-11T07:57:48.031719919Z" level=info msg="Removing container: 8ce538b6ca6c0b4df31b736aa2644f6edd98aff1278f6dcf5e981eafe02c6d2b" id=001185ed-009d-4837-9a7f-4b9a0b4c18ba name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 11 07:57:48 embed-certs-285966 crio[569]: time="2026-01-11T07:57:48.040828607Z" level=info msg="Removed container 8ce538b6ca6c0b4df31b736aa2644f6edd98aff1278f6dcf5e981eafe02c6d2b: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-blplb/dashboard-metrics-scraper" id=001185ed-009d-4837-9a7f-4b9a0b4c18ba name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 11 07:58:00 embed-certs-285966 crio[569]: time="2026-01-11T07:58:00.063292077Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=db88e11b-3e70-4208-83a3-e5421c1340fd name=/runtime.v1.ImageService/ImageStatus
	Jan 11 07:58:00 embed-certs-285966 crio[569]: time="2026-01-11T07:58:00.064497775Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=679a0b30-ad22-471c-8e76-419731913039 name=/runtime.v1.ImageService/ImageStatus
	Jan 11 07:58:00 embed-certs-285966 crio[569]: time="2026-01-11T07:58:00.065683572Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=a6c5ed38-c5fd-46e4-a776-eb5b800f68b5 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 11 07:58:00 embed-certs-285966 crio[569]: time="2026-01-11T07:58:00.065849752Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 11 07:58:00 embed-certs-285966 crio[569]: time="2026-01-11T07:58:00.07027515Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 11 07:58:00 embed-certs-285966 crio[569]: time="2026-01-11T07:58:00.070461027Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/9084ed72879e2ad40d6a7f2698d4a65f309eb977d3c42af764bfa194601ee6f4/merged/etc/passwd: no such file or directory"
	Jan 11 07:58:00 embed-certs-285966 crio[569]: time="2026-01-11T07:58:00.070492109Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/9084ed72879e2ad40d6a7f2698d4a65f309eb977d3c42af764bfa194601ee6f4/merged/etc/group: no such file or directory"
	Jan 11 07:58:00 embed-certs-285966 crio[569]: time="2026-01-11T07:58:00.070759989Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 11 07:58:00 embed-certs-285966 crio[569]: time="2026-01-11T07:58:00.099774905Z" level=info msg="Created container f600ae35f38886b304a554efeb2062cc07675a31d6b67c1422079f15c4424867: kube-system/storage-provisioner/storage-provisioner" id=a6c5ed38-c5fd-46e4-a776-eb5b800f68b5 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 11 07:58:00 embed-certs-285966 crio[569]: time="2026-01-11T07:58:00.100411818Z" level=info msg="Starting container: f600ae35f38886b304a554efeb2062cc07675a31d6b67c1422079f15c4424867" id=34dfce99-f163-4db5-9e9c-e16a988a2a88 name=/runtime.v1.RuntimeService/StartContainer
	Jan 11 07:58:00 embed-certs-285966 crio[569]: time="2026-01-11T07:58:00.102280067Z" level=info msg="Started container" PID=1803 containerID=f600ae35f38886b304a554efeb2062cc07675a31d6b67c1422079f15c4424867 description=kube-system/storage-provisioner/storage-provisioner id=34dfce99-f163-4db5-9e9c-e16a988a2a88 name=/runtime.v1.RuntimeService/StartContainer sandboxID=dfc987a29c99e5b46c1fd8ce660a665f0960f72c5fc4a68cb6c27e22def91cc8
	Jan 11 07:58:12 embed-certs-285966 crio[569]: time="2026-01-11T07:58:12.921088306Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=ffa294aa-2b96-4e6e-abaf-e53786790d7f name=/runtime.v1.ImageService/ImageStatus
	Jan 11 07:58:12 embed-certs-285966 crio[569]: time="2026-01-11T07:58:12.922307817Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=05256c1d-e787-4901-84ba-6290d484e3f7 name=/runtime.v1.ImageService/ImageStatus
	Jan 11 07:58:12 embed-certs-285966 crio[569]: time="2026-01-11T07:58:12.923623779Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-blplb/dashboard-metrics-scraper" id=890fddea-93d6-4592-8cf8-ef3a026a53cf name=/runtime.v1.RuntimeService/CreateContainer
	Jan 11 07:58:12 embed-certs-285966 crio[569]: time="2026-01-11T07:58:12.923780233Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 11 07:58:12 embed-certs-285966 crio[569]: time="2026-01-11T07:58:12.930920832Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 11 07:58:12 embed-certs-285966 crio[569]: time="2026-01-11T07:58:12.931765205Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 11 07:58:12 embed-certs-285966 crio[569]: time="2026-01-11T07:58:12.962536903Z" level=info msg="Created container fdb98f2e54e541a76a833ea05306d60550094417823aa81ced40c4445786521e: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-blplb/dashboard-metrics-scraper" id=890fddea-93d6-4592-8cf8-ef3a026a53cf name=/runtime.v1.RuntimeService/CreateContainer
	Jan 11 07:58:12 embed-certs-285966 crio[569]: time="2026-01-11T07:58:12.963311347Z" level=info msg="Starting container: fdb98f2e54e541a76a833ea05306d60550094417823aa81ced40c4445786521e" id=e3f9892a-d2ce-4aea-9026-74865b8b724d name=/runtime.v1.RuntimeService/StartContainer
	Jan 11 07:58:12 embed-certs-285966 crio[569]: time="2026-01-11T07:58:12.965605486Z" level=info msg="Started container" PID=1842 containerID=fdb98f2e54e541a76a833ea05306d60550094417823aa81ced40c4445786521e description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-blplb/dashboard-metrics-scraper id=e3f9892a-d2ce-4aea-9026-74865b8b724d name=/runtime.v1.RuntimeService/StartContainer sandboxID=791aed063c4835893ce28d3a69c70b53f43e6e122c6db27f3a92dd21ce2618a3
	Jan 11 07:58:13 embed-certs-285966 crio[569]: time="2026-01-11T07:58:13.104995257Z" level=info msg="Removing container: e68fec00e5d776a19524cb8271c8c68327eb564209e886604af4a0740b32c596" id=9991e3e7-ef9d-40e2-8314-c530cd534188 name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 11 07:58:13 embed-certs-285966 crio[569]: time="2026-01-11T07:58:13.116456108Z" level=info msg="Removed container e68fec00e5d776a19524cb8271c8c68327eb564209e886604af4a0740b32c596: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-blplb/dashboard-metrics-scraper" id=9991e3e7-ef9d-40e2-8314-c530cd534188 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	fdb98f2e54e54       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           7 seconds ago       Exited              dashboard-metrics-scraper   3                   791aed063c483       dashboard-metrics-scraper-867fb5f87b-blplb   kubernetes-dashboard
	f600ae35f3888       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           20 seconds ago      Running             storage-provisioner         1                   dfc987a29c99e       storage-provisioner                          kube-system
	bf8be33f60e04       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   40 seconds ago      Running             kubernetes-dashboard        0                   9a2384fcee142       kubernetes-dashboard-b84665fb8-crv8j         kubernetes-dashboard
	0009fd589117f       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                           50 seconds ago      Running             coredns                     0                   629ac76cdbf2a       coredns-7d764666f9-2ffx6                     kube-system
	80c7f49420120       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           50 seconds ago      Running             busybox                     1                   7c2e5c6182757       busybox                                      default
	120338a03655e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           50 seconds ago      Exited              storage-provisioner         0                   dfc987a29c99e       storage-provisioner                          kube-system
	620c41d729411       32652ff1bbe6b6df16a3dc621fcad4e6e185a2852bffc5847aab79e36c54bfb8                                           50 seconds ago      Running             kube-proxy                  0                   8317dd44c7571       kube-proxy-2slrz                             kube-system
	df0529eb33f7e       4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251                                           50 seconds ago      Running             kindnet-cni                 0                   22a9ede88038a       kindnet-jqj7z                                kube-system
	549b775272237       550794e3b12ac21ec7fd940bdfb45f7c8dae3c52acd8c70e580b882de20c3dcc                                           52 seconds ago      Running             kube-scheduler              0                   a6a35dc643226       kube-scheduler-embed-certs-285966            kube-system
	922cd28382c3c       0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2                                           52 seconds ago      Running             etcd                        0                   83fb4529af8a3       etcd-embed-certs-285966                      kube-system
	1c93a72833005       2c9a4b058bd7e6ec479c38f9e8a1dac2f5ee5b0a3ebda6dfac968e8720229508                                           52 seconds ago      Running             kube-controller-manager     0                   5fb7fecf3031a       kube-controller-manager-embed-certs-285966   kube-system
	99ff4271fc175       5c6acd67e9cd10eb60a246cd233db251ef62ea97e6572f897e873f0cb648f499                                           52 seconds ago      Running             kube-apiserver              0                   ae5f472ad51db       kube-apiserver-embed-certs-285966            kube-system
	
	
	==> coredns [0009fd589117fa2571af77390024cd0e97da87ad32579cb2345f04fa42a054c0] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:43679 - 6897 "HINFO IN 957526106081891227.1085550982097879218. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.01964181s
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	
	
	==> describe nodes <==
	Name:               embed-certs-285966
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-285966
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4473aa4ffaa416872fe849e19c0ce3dabca02c04
	                    minikube.k8s.io/name=embed-certs-285966
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2026_01_11T07_56_31_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 11 Jan 2026 07:56:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-285966
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 11 Jan 2026 07:58:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 11 Jan 2026 07:57:59 +0000   Sun, 11 Jan 2026 07:56:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 11 Jan 2026 07:57:59 +0000   Sun, 11 Jan 2026 07:56:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 11 Jan 2026 07:57:59 +0000   Sun, 11 Jan 2026 07:56:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 11 Jan 2026 07:57:59 +0000   Sun, 11 Jan 2026 07:56:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    embed-certs-285966
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 19a608e996fda0d6a8d0aefd69620b69
	  System UUID:                0f057153-04bf-4365-8e21-27be17dcfc7b
	  Boot ID:                    bf5ef4fd-1e5e-4242-b522-48b308ed11f1
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.35.0
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 coredns-7d764666f9-2ffx6                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     104s
	  kube-system                 etcd-embed-certs-285966                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         110s
	  kube-system                 kindnet-jqj7z                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      104s
	  kube-system                 kube-apiserver-embed-certs-285966             250m (3%)     0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-controller-manager-embed-certs-285966    200m (2%)     0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-proxy-2slrz                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	  kube-system                 kube-scheduler-embed-certs-285966             100m (1%)     0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	  kubernetes-dashboard        dashboard-metrics-scraper-867fb5f87b-blplb    0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-crv8j          0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  106s  node-controller  Node embed-certs-285966 event: Registered Node embed-certs-285966 in Controller
	  Normal  RegisteredNode  48s   node-controller  Node embed-certs-285966 event: Registered Node embed-certs-285966 in Controller
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff d2 27 60 f0 e8 33 08 06
	[  +7.744805] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ae ab 39 92 a9 51 08 06
	[  +3.804668] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff f6 cc 68 e4 77 08 08 06
	[  +0.000399] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff d2 27 60 f0 e8 33 08 06
	[ +13.685163] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff ce 63 58 3f ae 06 08 06
	[  +0.000359] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ae ab 39 92 a9 51 08 06
	[ +15.294069] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 62 fd 79 5e 2e 8a 08 06
	[  +0.000335] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 92 a7 42 30 c5 33 08 06
	[  +0.001094] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 76 49 4e 78 31 d8 08 06
	[Jan11 07:56] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 1e 72 0a 69 e2 8c 08 06
	[  +0.129588] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 46 31 8d 74 d3 50 08 06
	[ +23.675508] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 72 b4 a7 bf 59 ba 08 06
	[  +0.000409] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 46 31 8d 74 d3 50 08 06
	
	
	==> etcd [922cd28382c3c7abe22799ee282d7e8cdbde7bdf06d9c7ad7752450641f086fe] <==
	{"level":"info","ts":"2026-01-11T07:57:27.489366Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2026-01-11T07:57:27.489378Z","caller":"membership/cluster.go:433","msg":"ignore already added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2026-01-11T07:57:27.489366Z","caller":"embed/etcd.go:292","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2026-01-11T07:57:27.489412Z","caller":"embed/etcd.go:890","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2026-01-11T07:57:27.489495Z","caller":"embed/etcd.go:640","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2026-01-11T07:57:27.489501Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","from":"3.6","to":"3.6"}
	{"level":"info","ts":"2026-01-11T07:57:27.489513Z","caller":"embed/etcd.go:611","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2026-01-11T07:57:27.579637Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"9f0758e1c58a86ed is starting a new election at term 2"}
	{"level":"info","ts":"2026-01-11T07:57:27.579732Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"9f0758e1c58a86ed became pre-candidate at term 2"}
	{"level":"info","ts":"2026-01-11T07:57:27.579840Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2026-01-11T07:57:27.579869Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"9f0758e1c58a86ed has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2026-01-11T07:57:27.579889Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"9f0758e1c58a86ed became candidate at term 3"}
	{"level":"info","ts":"2026-01-11T07:57:27.580693Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2026-01-11T07:57:27.580728Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"9f0758e1c58a86ed has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2026-01-11T07:57:27.580751Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"9f0758e1c58a86ed became leader at term 3"}
	{"level":"info","ts":"2026-01-11T07:57:27.580759Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2026-01-11T07:57:27.581580Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:embed-certs-285966 ClientURLs:[https://192.168.85.2:2379]}","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2026-01-11T07:57:27.581601Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-11T07:57:27.581588Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-11T07:57:27.581849Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2026-01-11T07:57:27.581877Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2026-01-11T07:57:27.585144Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-11T07:57:27.585476Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-11T07:57:27.589003Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2026-01-11T07:57:27.589483Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 07:58:20 up 40 min,  0 user,  load average: 3.66, 3.50, 2.52
	Linux embed-certs-285966 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [df0529eb33f7e4986b1917bf3aad1944366f3a1e84f1d7f15c408faa6e03f4f0] <==
	I0111 07:57:29.576474       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0111 07:57:29.576724       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I0111 07:57:29.576933       1 main.go:148] setting mtu 1500 for CNI 
	I0111 07:57:29.576959       1 main.go:178] kindnetd IP family: "ipv4"
	I0111 07:57:29.576984       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2026-01-11T07:57:29Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0111 07:57:29.874457       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0111 07:57:29.874656       1 controller.go:381] "Waiting for informer caches to sync"
	I0111 07:57:29.874790       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0111 07:57:29.971861       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I0111 07:57:30.270719       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0111 07:57:30.270765       1 metrics.go:72] Registering metrics
	I0111 07:57:30.270875       1 controller.go:711] "Syncing nftables rules"
	I0111 07:57:39.874057       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0111 07:57:39.874093       1 main.go:301] handling current node
	I0111 07:57:49.875325       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0111 07:57:49.875374       1 main.go:301] handling current node
	I0111 07:57:59.874872       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0111 07:57:59.874904       1 main.go:301] handling current node
	I0111 07:58:09.880821       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0111 07:58:09.880868       1 main.go:301] handling current node
	I0111 07:58:19.878885       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0111 07:58:19.879049       1 main.go:301] handling current node
	
	
	==> kube-apiserver [99ff4271fc175e5592daa8e5b70c899fc351f028abff121118ae6f6deb18a8ef] <==
	I0111 07:57:28.908394       1 cache.go:39] Caches are synced for autoregister controller
	I0111 07:57:28.908418       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0111 07:57:28.908448       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0111 07:57:28.908685       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I0111 07:57:28.908927       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I0111 07:57:28.908961       1 shared_informer.go:377] "Caches are synced"
	I0111 07:57:28.915791       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0111 07:57:28.916019       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I0111 07:57:28.918002       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0111 07:57:28.918656       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	E0111 07:57:28.921769       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0111 07:57:28.927837       1 shared_informer.go:377] "Caches are synced"
	I0111 07:57:28.927899       1 policy_source.go:248] refreshing policies
	I0111 07:57:28.955073       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I0111 07:57:29.041184       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I0111 07:57:29.345854       1 controller.go:667] quota admission added evaluator for: namespaces
	I0111 07:57:29.432533       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I0111 07:57:29.467116       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0111 07:57:29.481544       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0111 07:57:29.534078       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.97.83.74"}
	I0111 07:57:29.548442       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.109.31.79"}
	I0111 07:57:29.813885       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I0111 07:57:32.637033       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0111 07:57:32.687050       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0111 07:57:32.792902       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [1c93a7283300525c6fa1c9642d2a153684e61da58f29768b0be4390bc81386eb] <==
	I0111 07:57:32.094087       1 shared_informer.go:377] "Caches are synced"
	I0111 07:57:32.094152       1 shared_informer.go:377] "Caches are synced"
	I0111 07:57:32.094251       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I0111 07:57:32.094360       1 shared_informer.go:377] "Caches are synced"
	I0111 07:57:32.094384       1 shared_informer.go:377] "Caches are synced"
	I0111 07:57:32.094717       1 shared_informer.go:377] "Caches are synced"
	I0111 07:57:32.094888       1 shared_informer.go:377] "Caches are synced"
	I0111 07:57:32.095301       1 shared_informer.go:377] "Caches are synced"
	I0111 07:57:32.095760       1 shared_informer.go:377] "Caches are synced"
	I0111 07:57:32.095873       1 shared_informer.go:377] "Caches are synced"
	I0111 07:57:32.095887       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="embed-certs-285966"
	I0111 07:57:32.095897       1 shared_informer.go:377] "Caches are synced"
	I0111 07:57:32.095929       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I0111 07:57:32.098412       1 shared_informer.go:377] "Caches are synced"
	I0111 07:57:32.098642       1 shared_informer.go:377] "Caches are synced"
	I0111 07:57:32.099318       1 shared_informer.go:377] "Caches are synced"
	I0111 07:57:32.099857       1 shared_informer.go:377] "Caches are synced"
	I0111 07:57:32.100741       1 shared_informer.go:377] "Caches are synced"
	I0111 07:57:32.100869       1 shared_informer.go:377] "Caches are synced"
	I0111 07:57:32.115388       1 shared_informer.go:370] "Waiting for caches to sync"
	I0111 07:57:32.192157       1 shared_informer.go:377] "Caches are synced"
	I0111 07:57:32.192186       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I0111 07:57:32.192192       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I0111 07:57:32.218232       1 shared_informer.go:377] "Caches are synced"
	I0111 07:57:32.694566       1 endpointslice_controller.go:361] "Error syncing endpoint slices for service, retrying" key="kube-system/kube-dns" err="EndpointSlice informer cache is out of date"
	
	
	==> kube-proxy [620c41d7294111dc05b872982ad046e2a32a88a21c31868000a510d357363b23] <==
	I0111 07:57:29.415281       1 server_linux.go:53] "Using iptables proxy"
	I0111 07:57:29.510381       1 shared_informer.go:370] "Waiting for caches to sync"
	I0111 07:57:29.611704       1 shared_informer.go:377] "Caches are synced"
	I0111 07:57:29.611744       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E0111 07:57:29.611868       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0111 07:57:29.640473       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0111 07:57:29.640699       1 server_linux.go:136] "Using iptables Proxier"
	I0111 07:57:29.649051       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0111 07:57:29.649762       1 server.go:529] "Version info" version="v1.35.0"
	I0111 07:57:29.649784       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0111 07:57:29.651636       1 config.go:106] "Starting endpoint slice config controller"
	I0111 07:57:29.651657       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0111 07:57:29.651685       1 config.go:403] "Starting serviceCIDR config controller"
	I0111 07:57:29.651690       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0111 07:57:29.651708       1 config.go:309] "Starting node config controller"
	I0111 07:57:29.651714       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0111 07:57:29.652617       1 config.go:200] "Starting service config controller"
	I0111 07:57:29.652660       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0111 07:57:29.751989       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0111 07:57:29.752030       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0111 07:57:29.752119       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0111 07:57:29.753232       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [549b7752722376433db3102bf0433676cbe020d474e238af58fe8f3ce20e7cad] <==
	I0111 07:57:27.728549       1 serving.go:386] Generated self-signed cert in-memory
	W0111 07:57:28.867028       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0111 07:57:28.867064       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0111 07:57:28.867076       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0111 07:57:28.867086       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0111 07:57:28.898316       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0"
	I0111 07:57:28.898413       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0111 07:57:28.901433       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0111 07:57:28.901551       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0111 07:57:28.904047       1 shared_informer.go:370] "Waiting for caches to sync"
	I0111 07:57:28.901573       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0111 07:57:29.004781       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Jan 11 07:57:47 embed-certs-285966 kubelet[734]: E0111 07:57:47.025687     734 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-embed-certs-285966" containerName="etcd"
	Jan 11 07:57:47 embed-certs-285966 kubelet[734]: E0111 07:57:47.919847     734 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-blplb" containerName="dashboard-metrics-scraper"
	Jan 11 07:57:47 embed-certs-285966 kubelet[734]: I0111 07:57:47.919885     734 scope.go:122] "RemoveContainer" containerID="8ce538b6ca6c0b4df31b736aa2644f6edd98aff1278f6dcf5e981eafe02c6d2b"
	Jan 11 07:57:48 embed-certs-285966 kubelet[734]: I0111 07:57:48.030419     734 scope.go:122] "RemoveContainer" containerID="8ce538b6ca6c0b4df31b736aa2644f6edd98aff1278f6dcf5e981eafe02c6d2b"
	Jan 11 07:57:48 embed-certs-285966 kubelet[734]: E0111 07:57:48.030668     734 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-blplb" containerName="dashboard-metrics-scraper"
	Jan 11 07:57:48 embed-certs-285966 kubelet[734]: I0111 07:57:48.030703     734 scope.go:122] "RemoveContainer" containerID="e68fec00e5d776a19524cb8271c8c68327eb564209e886604af4a0740b32c596"
	Jan 11 07:57:48 embed-certs-285966 kubelet[734]: E0111 07:57:48.030957     734 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-blplb_kubernetes-dashboard(144f86f0-8eed-45f1-b401-7261a9de3696)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-blplb" podUID="144f86f0-8eed-45f1-b401-7261a9de3696"
	Jan 11 07:57:53 embed-certs-285966 kubelet[734]: E0111 07:57:53.742340     734 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-blplb" containerName="dashboard-metrics-scraper"
	Jan 11 07:57:53 embed-certs-285966 kubelet[734]: I0111 07:57:53.742382     734 scope.go:122] "RemoveContainer" containerID="e68fec00e5d776a19524cb8271c8c68327eb564209e886604af4a0740b32c596"
	Jan 11 07:57:53 embed-certs-285966 kubelet[734]: E0111 07:57:53.742547     734 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-blplb_kubernetes-dashboard(144f86f0-8eed-45f1-b401-7261a9de3696)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-blplb" podUID="144f86f0-8eed-45f1-b401-7261a9de3696"
	Jan 11 07:58:00 embed-certs-285966 kubelet[734]: I0111 07:58:00.062749     734 scope.go:122] "RemoveContainer" containerID="120338a03655e1de2f7130f7e1a1392b48ae76e5908b1a38031ab21d34bce59e"
	Jan 11 07:58:00 embed-certs-285966 kubelet[734]: E0111 07:58:00.969700     734 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-2ffx6" containerName="coredns"
	Jan 11 07:58:12 embed-certs-285966 kubelet[734]: E0111 07:58:12.920355     734 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-blplb" containerName="dashboard-metrics-scraper"
	Jan 11 07:58:12 embed-certs-285966 kubelet[734]: I0111 07:58:12.920411     734 scope.go:122] "RemoveContainer" containerID="e68fec00e5d776a19524cb8271c8c68327eb564209e886604af4a0740b32c596"
	Jan 11 07:58:13 embed-certs-285966 kubelet[734]: I0111 07:58:13.102559     734 scope.go:122] "RemoveContainer" containerID="e68fec00e5d776a19524cb8271c8c68327eb564209e886604af4a0740b32c596"
	Jan 11 07:58:13 embed-certs-285966 kubelet[734]: E0111 07:58:13.102886     734 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-blplb" containerName="dashboard-metrics-scraper"
	Jan 11 07:58:13 embed-certs-285966 kubelet[734]: I0111 07:58:13.102930     734 scope.go:122] "RemoveContainer" containerID="fdb98f2e54e541a76a833ea05306d60550094417823aa81ced40c4445786521e"
	Jan 11 07:58:13 embed-certs-285966 kubelet[734]: E0111 07:58:13.103156     734 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-blplb_kubernetes-dashboard(144f86f0-8eed-45f1-b401-7261a9de3696)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-blplb" podUID="144f86f0-8eed-45f1-b401-7261a9de3696"
	Jan 11 07:58:14 embed-certs-285966 kubelet[734]: E0111 07:58:14.107507     734 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-blplb" containerName="dashboard-metrics-scraper"
	Jan 11 07:58:14 embed-certs-285966 kubelet[734]: I0111 07:58:14.107541     734 scope.go:122] "RemoveContainer" containerID="fdb98f2e54e541a76a833ea05306d60550094417823aa81ced40c4445786521e"
	Jan 11 07:58:14 embed-certs-285966 kubelet[734]: E0111 07:58:14.107743     734 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-blplb_kubernetes-dashboard(144f86f0-8eed-45f1-b401-7261a9de3696)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-blplb" podUID="144f86f0-8eed-45f1-b401-7261a9de3696"
	Jan 11 07:58:14 embed-certs-285966 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Jan 11 07:58:14 embed-certs-285966 systemd[1]: kubelet.service: Deactivated successfully.
	Jan 11 07:58:14 embed-certs-285966 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Jan 11 07:58:14 embed-certs-285966 systemd[1]: kubelet.service: Consumed 1.694s CPU time.
	
	
	==> kubernetes-dashboard [bf8be33f60e0451ad75aa007d76c844836b6d2655aeef7c05e71bdd2dddb58bb] <==
	2026/01/11 07:57:39 Using namespace: kubernetes-dashboard
	2026/01/11 07:57:39 Using in-cluster config to connect to apiserver
	2026/01/11 07:57:39 Using secret token for csrf signing
	2026/01/11 07:57:39 Initializing csrf token from kubernetes-dashboard-csrf secret
	2026/01/11 07:57:39 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2026/01/11 07:57:39 Successful initial request to the apiserver, version: v1.35.0
	2026/01/11 07:57:39 Generating JWE encryption key
	2026/01/11 07:57:39 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2026/01/11 07:57:39 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2026/01/11 07:57:39 Initializing JWE encryption key from synchronized object
	2026/01/11 07:57:39 Creating in-cluster Sidecar client
	2026/01/11 07:57:39 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2026/01/11 07:57:39 Serving insecurely on HTTP port: 9090
	2026/01/11 07:58:09 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2026/01/11 07:57:39 Starting overwatch
	
	
	==> storage-provisioner [120338a03655e1de2f7130f7e1a1392b48ae76e5908b1a38031ab21d34bce59e] <==
	I0111 07:57:29.386487       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0111 07:57:59.391541       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [f600ae35f38886b304a554efeb2062cc07675a31d6b67c1422079f15c4424867] <==
	I0111 07:58:00.114523       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0111 07:58:00.121764       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0111 07:58:00.121815       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W0111 07:58:00.123784       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 07:58:03.579037       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 07:58:07.839393       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 07:58:11.438340       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 07:58:14.494257       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 07:58:17.517916       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 07:58:17.526323       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0111 07:58:17.526571       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0111 07:58:17.526857       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ba85867d-09e8-4b62-9236-97b9a3a7c673", APIVersion:"v1", ResourceVersion:"686", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-285966_7c4e025c-369a-4612-8e64-4f3377916b41 became leader
	I0111 07:58:17.527074       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-285966_7c4e025c-369a-4612-8e64-4f3377916b41!
	W0111 07:58:17.537405       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 07:58:17.542134       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0111 07:58:17.628489       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-285966_7c4e025c-369a-4612-8e64-4f3377916b41!
	W0111 07:58:19.547521       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 07:58:19.557076       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 07:58:21.562302       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 07:58:21.574147       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-285966 -n embed-certs-285966
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-285966 -n embed-certs-285966: exit status 2 (437.860231ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context embed-certs-285966 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (8.54s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.16s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-613901 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-613901 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (287.000057ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-11T07:58:44Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-613901 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
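The stderr above shows the exit status 11 comes from the "check paused" precheck, not from the addon itself: the check shells out to sudo runc list -f json, and on this crio node that fails because the default /run/runc state directory does not exist. Below is a minimal sketch of that kind of check with a CRI-level fallback; it assumes crictl is installed and on PATH, and it illustrates the failure mode rather than minikube's actual code path.

package main

import (
	"fmt"
	"os/exec"
)

// listContainerIDs tries runc's default state dir first (the call that fails
// in the stderr above), then falls back to crictl, which queries the CRI
// runtime (cri-o here) directly and does not depend on /run/runc existing.
func listContainerIDs() ([]byte, error) {
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
	if err == nil {
		return out, nil
	}
	// runc state dir missing or unreadable: list container IDs via crictl instead.
	return exec.Command("sudo", "crictl", "ps", "-a", "--quiet").CombinedOutput()
}

func main() {
	out, err := listContainerIDs()
	if err != nil {
		fmt.Println("container listing failed:", err)
		return
	}
	fmt.Printf("%s", out)
}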
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-613901
helpers_test.go:244: (dbg) docker inspect newest-cni-613901:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "11841d4ece70946c0082b1716d9cbfed372d03bdc672f9a541df8957a6873d40",
	        "Created": "2026-01-11T07:58:23.124097279Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 329652,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2026-01-11T07:58:23.751573231Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:9d81ad6f098dcf669510a21e6f98da4d3301079c139c22f3d7bcee050830f45e",
	        "ResolvConfPath": "/var/lib/docker/containers/11841d4ece70946c0082b1716d9cbfed372d03bdc672f9a541df8957a6873d40/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/11841d4ece70946c0082b1716d9cbfed372d03bdc672f9a541df8957a6873d40/hostname",
	        "HostsPath": "/var/lib/docker/containers/11841d4ece70946c0082b1716d9cbfed372d03bdc672f9a541df8957a6873d40/hosts",
	        "LogPath": "/var/lib/docker/containers/11841d4ece70946c0082b1716d9cbfed372d03bdc672f9a541df8957a6873d40/11841d4ece70946c0082b1716d9cbfed372d03bdc672f9a541df8957a6873d40-json.log",
	        "Name": "/newest-cni-613901",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-613901:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-613901",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "11841d4ece70946c0082b1716d9cbfed372d03bdc672f9a541df8957a6873d40",
	                "LowerDir": "/var/lib/docker/overlay2/24467b78c5d7726ce7b085d0dd3f727668774fcb8480ab182ba19ae8a2f0c8f6-init/diff:/var/lib/docker/overlay2/fbed3e2386e680328722e9ed68250a4aae1dd3b50a64848ee2ebb685fb1cc002/diff",
	                "MergedDir": "/var/lib/docker/overlay2/24467b78c5d7726ce7b085d0dd3f727668774fcb8480ab182ba19ae8a2f0c8f6/merged",
	                "UpperDir": "/var/lib/docker/overlay2/24467b78c5d7726ce7b085d0dd3f727668774fcb8480ab182ba19ae8a2f0c8f6/diff",
	                "WorkDir": "/var/lib/docker/overlay2/24467b78c5d7726ce7b085d0dd3f727668774fcb8480ab182ba19ae8a2f0c8f6/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-613901",
	                "Source": "/var/lib/docker/volumes/newest-cni-613901/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-613901",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-613901",
	                "name.minikube.sigs.k8s.io": "newest-cni-613901",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "d4b2bb12f9a9f8909940597a47c75bfa815ca9e533699eea755188240ce28174",
	            "SandboxKey": "/var/run/docker/netns/d4b2bb12f9a9",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33128"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33129"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33132"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33130"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33131"
	                    }
	                ]
	            },
	            "Networks": {
	                "newest-cni-613901": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "cf117b97d00c25a416ee9de4a3b71e11b7933ae5b129ce6cd466ea7394700035",
	                    "EndpointID": "5fe18227d99fdecbcb63c1150099b5f6e9c8afab9538f7d55b937389f67580fd",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "46:45:db:89:1b:0b",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-613901",
	                        "11841d4ece70"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
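The Ports map in the inspect output above (22/tcp mapped to 127.0.0.1:33128) is what the later docker container inspect -f calls in this log read before opening the SSH connection. A minimal sketch of the same lookup follows, using the exact Go-template expression that appears further down in the log; the surrounding program is only illustrative.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same Go-template lookup the sshutil lines below use to find the mapped SSH port.
	format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", format, "newest-cni-613901").Output()
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Println("ssh port:", strings.TrimSpace(string(out)))
}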
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-613901 -n newest-cni-613901
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-613901 logs -n 25
helpers_test.go:261: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────
────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │              PROFILE              │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────
────────────┤
	│ start   │ -p embed-certs-285966 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                        │ embed-certs-285966                │ jenkins │ v1.37.0 │ 11 Jan 26 07:57 UTC │ 11 Jan 26 07:58 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-585688 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-585688      │ jenkins │ v1.37.0 │ 11 Jan 26 07:57 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-585688 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-585688      │ jenkins │ v1.37.0 │ 11 Jan 26 07:57 UTC │ 11 Jan 26 07:58 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-585688 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-585688      │ jenkins │ v1.37.0 │ 11 Jan 26 07:58 UTC │ 11 Jan 26 07:58 UTC │
	│ start   │ -p default-k8s-diff-port-585688 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-585688      │ jenkins │ v1.37.0 │ 11 Jan 26 07:58 UTC │                     │
	│ image   │ old-k8s-version-086314 image list --format=json                                                                                                                                                                                               │ old-k8s-version-086314            │ jenkins │ v1.37.0 │ 11 Jan 26 07:58 UTC │ 11 Jan 26 07:58 UTC │
	│ pause   │ -p old-k8s-version-086314 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-086314            │ jenkins │ v1.37.0 │ 11 Jan 26 07:58 UTC │                     │
	│ delete  │ -p old-k8s-version-086314                                                                                                                                                                                                                     │ old-k8s-version-086314            │ jenkins │ v1.37.0 │ 11 Jan 26 07:58 UTC │ 11 Jan 26 07:58 UTC │
	│ image   │ no-preload-875594 image list --format=json                                                                                                                                                                                                    │ no-preload-875594                 │ jenkins │ v1.37.0 │ 11 Jan 26 07:58 UTC │ 11 Jan 26 07:58 UTC │
	│ pause   │ -p no-preload-875594 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-875594                 │ jenkins │ v1.37.0 │ 11 Jan 26 07:58 UTC │                     │
	│ image   │ embed-certs-285966 image list --format=json                                                                                                                                                                                                   │ embed-certs-285966                │ jenkins │ v1.37.0 │ 11 Jan 26 07:58 UTC │ 11 Jan 26 07:58 UTC │
	│ pause   │ -p embed-certs-285966 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-285966                │ jenkins │ v1.37.0 │ 11 Jan 26 07:58 UTC │                     │
	│ delete  │ -p old-k8s-version-086314                                                                                                                                                                                                                     │ old-k8s-version-086314            │ jenkins │ v1.37.0 │ 11 Jan 26 07:58 UTC │ 11 Jan 26 07:58 UTC │
	│ start   │ -p newest-cni-613901 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ newest-cni-613901                 │ jenkins │ v1.37.0 │ 11 Jan 26 07:58 UTC │ 11 Jan 26 07:58 UTC │
	│ delete  │ -p no-preload-875594                                                                                                                                                                                                                          │ no-preload-875594                 │ jenkins │ v1.37.0 │ 11 Jan 26 07:58 UTC │ 11 Jan 26 07:58 UTC │
	│ delete  │ -p embed-certs-285966                                                                                                                                                                                                                         │ embed-certs-285966                │ jenkins │ v1.37.0 │ 11 Jan 26 07:58 UTC │ 11 Jan 26 07:58 UTC │
	│ delete  │ -p no-preload-875594                                                                                                                                                                                                                          │ no-preload-875594                 │ jenkins │ v1.37.0 │ 11 Jan 26 07:58 UTC │ 11 Jan 26 07:58 UTC │
	│ start   │ -p test-preload-dl-gcs-476138 --download-only --kubernetes-version v1.34.0-rc.1 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=crio                                                                        │ test-preload-dl-gcs-476138        │ jenkins │ v1.37.0 │ 11 Jan 26 07:58 UTC │                     │
	│ delete  │ -p embed-certs-285966                                                                                                                                                                                                                         │ embed-certs-285966                │ jenkins │ v1.37.0 │ 11 Jan 26 07:58 UTC │ 11 Jan 26 07:58 UTC │
	│ delete  │ -p test-preload-dl-gcs-476138                                                                                                                                                                                                                 │ test-preload-dl-gcs-476138        │ jenkins │ v1.37.0 │ 11 Jan 26 07:58 UTC │ 11 Jan 26 07:58 UTC │
	│ start   │ -p test-preload-dl-github-044639 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=github --alsologtostderr --v=1 --driver=docker  --container-runtime=crio                                                                  │ test-preload-dl-github-044639     │ jenkins │ v1.37.0 │ 11 Jan 26 07:58 UTC │                     │
	│ delete  │ -p test-preload-dl-github-044639                                                                                                                                                                                                              │ test-preload-dl-github-044639     │ jenkins │ v1.37.0 │ 11 Jan 26 07:58 UTC │ 11 Jan 26 07:58 UTC │
	│ start   │ -p test-preload-dl-gcs-cached-787958 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=crio                                                                 │ test-preload-dl-gcs-cached-787958 │ jenkins │ v1.37.0 │ 11 Jan 26 07:58 UTC │                     │
	│ delete  │ -p test-preload-dl-gcs-cached-787958                                                                                                                                                                                                          │ test-preload-dl-gcs-cached-787958 │ jenkins │ v1.37.0 │ 11 Jan 26 07:58 UTC │ 11 Jan 26 07:58 UTC │
	│ addons  │ enable metrics-server -p newest-cni-613901 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-613901                 │ jenkins │ v1.37.0 │ 11 Jan 26 07:58 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────
────────────┘
	
	
	==> Last Start <==
	Log file created at: 2026/01/11 07:58:33
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0111 07:58:33.663854  332211 out.go:360] Setting OutFile to fd 1 ...
	I0111 07:58:33.663964  332211 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 07:58:33.663982  332211 out.go:374] Setting ErrFile to fd 2...
	I0111 07:58:33.663989  332211 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 07:58:33.664269  332211 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22402-4689/.minikube/bin
	I0111 07:58:33.664890  332211 out.go:368] Setting JSON to false
	I0111 07:58:33.666496  332211 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2462,"bootTime":1768115852,"procs":450,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0111 07:58:33.666599  332211 start.go:143] virtualization: kvm guest
	I0111 07:58:33.668384  332211 out.go:179] * [test-preload-dl-gcs-cached-787958] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0111 07:58:33.670062  332211 notify.go:221] Checking for updates...
	I0111 07:58:33.670074  332211 out.go:179]   - MINIKUBE_LOCATION=22402
	I0111 07:58:33.671435  332211 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0111 07:58:33.672650  332211 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22402-4689/kubeconfig
	I0111 07:58:33.673889  332211 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22402-4689/.minikube
	I0111 07:58:33.677946  332211 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0111 07:58:33.679068  332211 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	W0111 07:58:29.240157  321160 pod_ready.go:104] pod "coredns-7d764666f9-9n4hp" is not "Ready", error: <nil>
	W0111 07:58:31.738736  321160 pod_ready.go:104] pod "coredns-7d764666f9-9n4hp" is not "Ready", error: <nil>
	W0111 07:58:33.740708  321160 pod_ready.go:104] pod "coredns-7d764666f9-9n4hp" is not "Ready", error: <nil>
	I0111 07:58:33.680675  332211 config.go:182] Loaded profile config "default-k8s-diff-port-585688": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 07:58:33.680842  332211 config.go:182] Loaded profile config "newest-cni-613901": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 07:58:33.680964  332211 driver.go:422] Setting default libvirt URI to qemu:///system
	I0111 07:58:33.708859  332211 docker.go:124] docker version: linux-29.1.4:Docker Engine - Community
	I0111 07:58:33.709025  332211 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0111 07:58:33.776069  332211 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:52 OomKillDisable:false NGoroutines:66 SystemTime:2026-01-11 07:58:33.763855719 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0111 07:58:33.776226  332211 docker.go:319] overlay module found
	I0111 07:58:33.778030  332211 out.go:179] * Using the docker driver based on user configuration
	I0111 07:58:33.779243  332211 start.go:309] selected driver: docker
	I0111 07:58:33.779259  332211 start.go:928] validating driver "docker" against <nil>
	I0111 07:58:33.779365  332211 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0111 07:58:33.843372  332211 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:52 OomKillDisable:false NGoroutines:66 SystemTime:2026-01-11 07:58:33.831660885 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0111 07:58:33.843573  332211 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I0111 07:58:33.844334  332211 start_flags.go:417] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I0111 07:58:33.844538  332211 start_flags.go:1001] Wait components to verify : map[apiserver:true system_pods:true]
	I0111 07:58:33.847174  332211 out.go:179] * Using Docker driver with root privileges
	I0111 07:58:33.848429  332211 cni.go:84] Creating CNI manager for ""
	I0111 07:58:33.848499  332211 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0111 07:58:33.848511  332211 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I0111 07:58:33.848584  332211 start.go:353] cluster config:
	{Name:test-preload-dl-gcs-cached-787958 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0-rc.2 ClusterName:test-preload-dl-gcs-cached-787958 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIP
s:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s
Rosetta:false}
	I0111 07:58:33.850005  332211 out.go:179] * Starting "test-preload-dl-gcs-cached-787958" primary control-plane node in "test-preload-dl-gcs-cached-787958" cluster
	I0111 07:58:33.851354  332211 cache.go:134] Beginning downloading kic base image for docker with crio
	I0111 07:58:33.852526  332211 out.go:179] * Pulling base image v0.0.48-1768032998-22402 ...
	I0111 07:58:33.853836  332211 preload.go:188] Checking if preload exists for k8s version v1.34.0-rc.2 and runtime crio
	I0111 07:58:33.853871  332211 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22402-4689/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-rc.2-cri-o-overlay-amd64.tar.lz4
	I0111 07:58:33.853879  332211 cache.go:65] Caching tarball of preloaded images
	I0111 07:58:33.853933  332211 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 in local docker daemon
	I0111 07:58:33.853954  332211 preload.go:251] Found /home/jenkins/minikube-integration/22402-4689/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-rc.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0111 07:58:33.853965  332211 cache.go:68] Finished verifying existence of preloaded tar for v1.34.0-rc.2 on crio
	I0111 07:58:33.854082  332211 profile.go:143] Saving config to /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/test-preload-dl-gcs-cached-787958/config.json ...
	I0111 07:58:33.854105  332211 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/test-preload-dl-gcs-cached-787958/config.json: {Name:mk86db6628d6f331b6008e361c7323c297be43c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 07:58:33.854258  332211 preload.go:188] Checking if preload exists for k8s version v1.34.0-rc.2 and runtime crio
	I0111 07:58:33.854333  332211 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.0-rc.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.0-rc.2/bin/linux/amd64/kubectl.sha256
	I0111 07:58:33.875023  332211 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 in local docker daemon, skipping pull
	I0111 07:58:33.875044  332211 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 to local cache
	I0111 07:58:33.875128  332211 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 in local cache directory
	I0111 07:58:33.875145  332211 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 in local cache directory, skipping pull
	I0111 07:58:33.875150  332211 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 exists in cache, skipping pull
	I0111 07:58:33.875160  332211 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 as a tarball
	I0111 07:58:33.875179  332211 cache.go:243] Successfully downloaded all kic artifacts
	I0111 07:58:33.876941  332211 out.go:179] * Download complete!
	W0111 07:58:36.239383  321160 pod_ready.go:104] pod "coredns-7d764666f9-9n4hp" is not "Ready", error: <nil>
	W0111 07:58:38.738840  321160 pod_ready.go:104] pod "coredns-7d764666f9-9n4hp" is not "Ready", error: <nil>
	I0111 07:58:38.837767  326324 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I0111 07:58:38.837852  326324 kubeadm.go:319] [preflight] Running pre-flight checks
	I0111 07:58:38.837952  326324 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I0111 07:58:38.838055  326324 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I0111 07:58:38.838126  326324 kubeadm.go:319] OS: Linux
	I0111 07:58:38.838188  326324 kubeadm.go:319] CGROUPS_CPU: enabled
	I0111 07:58:38.838263  326324 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I0111 07:58:38.838340  326324 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I0111 07:58:38.838421  326324 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I0111 07:58:38.838501  326324 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I0111 07:58:38.838553  326324 kubeadm.go:319] CGROUPS_PIDS: enabled
	I0111 07:58:38.838599  326324 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I0111 07:58:38.838638  326324 kubeadm.go:319] CGROUPS_IO: enabled
	I0111 07:58:38.838732  326324 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0111 07:58:38.838892  326324 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0111 07:58:38.839030  326324 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0111 07:58:38.839120  326324 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0111 07:58:38.840669  326324 out.go:252]   - Generating certificates and keys ...
	I0111 07:58:38.840741  326324 kubeadm.go:319] [certs] Using existing ca certificate authority
	I0111 07:58:38.840814  326324 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I0111 07:58:38.840872  326324 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0111 07:58:38.840923  326324 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I0111 07:58:38.841009  326324 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I0111 07:58:38.841088  326324 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I0111 07:58:38.841179  326324 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I0111 07:58:38.841342  326324 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-613901] and IPs [192.168.103.2 127.0.0.1 ::1]
	I0111 07:58:38.841421  326324 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I0111 07:58:38.841596  326324 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-613901] and IPs [192.168.103.2 127.0.0.1 ::1]
	I0111 07:58:38.841686  326324 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0111 07:58:38.841771  326324 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I0111 07:58:38.841851  326324 kubeadm.go:319] [certs] Generating "sa" key and public key
	I0111 07:58:38.841939  326324 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0111 07:58:38.841995  326324 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0111 07:58:38.842067  326324 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0111 07:58:38.842130  326324 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0111 07:58:38.842192  326324 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0111 07:58:38.842255  326324 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0111 07:58:38.842327  326324 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0111 07:58:38.842395  326324 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0111 07:58:38.843552  326324 out.go:252]   - Booting up control plane ...
	I0111 07:58:38.843639  326324 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0111 07:58:38.843722  326324 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0111 07:58:38.843790  326324 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0111 07:58:38.843901  326324 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0111 07:58:38.843988  326324 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0111 07:58:38.844092  326324 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0111 07:58:38.844171  326324 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0111 07:58:38.844206  326324 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I0111 07:58:38.844321  326324 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0111 07:58:38.844409  326324 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0111 07:58:38.844459  326324 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.66809ms
	I0111 07:58:38.844555  326324 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0111 07:58:38.844654  326324 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8443/livez
	I0111 07:58:38.844735  326324 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0111 07:58:38.844832  326324 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0111 07:58:38.844921  326324 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.00522997s
	I0111 07:58:38.845025  326324 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 3.614108786s
	I0111 07:58:38.845134  326324 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 5.501465058s
	I0111 07:58:38.845246  326324 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0111 07:58:38.845363  326324 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0111 07:58:38.845440  326324 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I0111 07:58:38.845689  326324 kubeadm.go:319] [mark-control-plane] Marking the node newest-cni-613901 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0111 07:58:38.845742  326324 kubeadm.go:319] [bootstrap-token] Using token: ncgkmv.g9tbb4xyhevpj6mx
	I0111 07:58:38.847131  326324 out.go:252]   - Configuring RBAC rules ...
	I0111 07:58:38.847227  326324 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0111 07:58:38.847305  326324 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0111 07:58:38.847463  326324 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0111 07:58:38.847624  326324 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0111 07:58:38.847753  326324 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0111 07:58:38.847852  326324 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0111 07:58:38.847969  326324 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0111 07:58:38.848025  326324 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I0111 07:58:38.848091  326324 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I0111 07:58:38.848101  326324 kubeadm.go:319] 
	I0111 07:58:38.848168  326324 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I0111 07:58:38.848175  326324 kubeadm.go:319] 
	I0111 07:58:38.848249  326324 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I0111 07:58:38.848256  326324 kubeadm.go:319] 
	I0111 07:58:38.848277  326324 kubeadm.go:319]   mkdir -p $HOME/.kube
	I0111 07:58:38.848330  326324 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0111 07:58:38.848378  326324 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0111 07:58:38.848386  326324 kubeadm.go:319] 
	I0111 07:58:38.848439  326324 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I0111 07:58:38.848444  326324 kubeadm.go:319] 
	I0111 07:58:38.848498  326324 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0111 07:58:38.848507  326324 kubeadm.go:319] 
	I0111 07:58:38.848576  326324 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I0111 07:58:38.848648  326324 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0111 07:58:38.848709  326324 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0111 07:58:38.848715  326324 kubeadm.go:319] 
	I0111 07:58:38.848793  326324 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I0111 07:58:38.848886  326324 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I0111 07:58:38.848893  326324 kubeadm.go:319] 
	I0111 07:58:38.848966  326324 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token ncgkmv.g9tbb4xyhevpj6mx \
	I0111 07:58:38.849101  326324 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:c84f368e6b8058d9b8f5e45cbb9f009ab0e06984b3c0d5786b368fcef9c9a8fc \
	I0111 07:58:38.849125  326324 kubeadm.go:319] 	--control-plane 
	I0111 07:58:38.849131  326324 kubeadm.go:319] 
	I0111 07:58:38.849210  326324 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I0111 07:58:38.849217  326324 kubeadm.go:319] 
	I0111 07:58:38.849300  326324 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token ncgkmv.g9tbb4xyhevpj6mx \
	I0111 07:58:38.849406  326324 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:c84f368e6b8058d9b8f5e45cbb9f009ab0e06984b3c0d5786b368fcef9c9a8fc 
	I0111 07:58:38.849417  326324 cni.go:84] Creating CNI manager for ""
	I0111 07:58:38.849423  326324 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0111 07:58:38.850703  326324 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I0111 07:58:38.851957  326324 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0111 07:58:38.856361  326324 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.35.0/kubectl ...
	I0111 07:58:38.856386  326324 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I0111 07:58:38.871306  326324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0111 07:58:39.076786  326324 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0111 07:58:39.076870  326324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0111 07:58:39.076891  326324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-613901 minikube.k8s.io/updated_at=2026_01_11T07_58_39_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=4473aa4ffaa416872fe849e19c0ce3dabca02c04 minikube.k8s.io/name=newest-cni-613901 minikube.k8s.io/primary=true
	I0111 07:58:39.089395  326324 ops.go:34] apiserver oom_adj: -16
	I0111 07:58:39.156848  326324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0111 07:58:39.656982  326324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0111 07:58:40.157239  326324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0111 07:58:40.656948  326324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0111 07:58:41.157600  326324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0111 07:58:41.657161  326324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0111 07:58:42.157908  326324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0111 07:58:42.657182  326324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0111 07:58:43.157570  326324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0111 07:58:43.225029  326324 kubeadm.go:1114] duration metric: took 4.148233975s to wait for elevateKubeSystemPrivileges
	I0111 07:58:43.225069  326324 kubeadm.go:403] duration metric: took 13.550447519s to StartCluster
	I0111 07:58:43.225091  326324 settings.go:142] acquiring lock: {Name:mk091111a06dcfa678353e028948ce400417c137 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 07:58:43.225169  326324 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22402-4689/kubeconfig
	I0111 07:58:43.226549  326324 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-4689/kubeconfig: {Name:mkf02a7b755a7b9c0efbafb355649086e2a25e84 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 07:58:43.226827  326324 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0111 07:58:43.226842  326324 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0111 07:58:43.226927  326324 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0111 07:58:43.227022  326324 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-613901"
	I0111 07:58:43.227042  326324 addons.go:70] Setting default-storageclass=true in profile "newest-cni-613901"
	I0111 07:58:43.227049  326324 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-613901"
	I0111 07:58:43.227057  326324 config.go:182] Loaded profile config "newest-cni-613901": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 07:58:43.227081  326324 host.go:66] Checking if "newest-cni-613901" exists ...
	I0111 07:58:43.227061  326324 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-613901"
	I0111 07:58:43.227474  326324 cli_runner.go:164] Run: docker container inspect newest-cni-613901 --format={{.State.Status}}
	I0111 07:58:43.227603  326324 cli_runner.go:164] Run: docker container inspect newest-cni-613901 --format={{.State.Status}}
	I0111 07:58:43.228441  326324 out.go:179] * Verifying Kubernetes components...
	I0111 07:58:43.229720  326324 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0111 07:58:43.250712  326324 addons.go:239] Setting addon default-storageclass=true in "newest-cni-613901"
	I0111 07:58:43.250773  326324 host.go:66] Checking if "newest-cni-613901" exists ...
	I0111 07:58:43.250893  326324 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0111 07:58:43.251279  326324 cli_runner.go:164] Run: docker container inspect newest-cni-613901 --format={{.State.Status}}
	I0111 07:58:43.255355  326324 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0111 07:58:43.255378  326324 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0111 07:58:43.255442  326324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-613901
	I0111 07:58:43.280886  326324 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I0111 07:58:43.280916  326324 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0111 07:58:43.280978  326324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-613901
	I0111 07:58:43.282014  326324 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22402-4689/.minikube/machines/newest-cni-613901/id_rsa Username:docker}
	I0111 07:58:43.302112  326324 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22402-4689/.minikube/machines/newest-cni-613901/id_rsa Username:docker}
	I0111 07:58:43.319472  326324 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0111 07:58:43.370832  326324 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0111 07:58:43.397957  326324 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0111 07:58:43.423631  326324 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0111 07:58:43.485904  326324 start.go:987] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I0111 07:58:43.486945  326324 api_server.go:52] waiting for apiserver process to appear ...
	I0111 07:58:43.487007  326324 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0111 07:58:43.698726  326324 api_server.go:72] duration metric: took 471.844725ms to wait for apiserver process to appear ...
	I0111 07:58:43.698756  326324 api_server.go:88] waiting for apiserver healthz status ...
	I0111 07:58:43.698791  326324 api_server.go:299] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I0111 07:58:43.704570  326324 api_server.go:325] https://192.168.103.2:8443/healthz returned 200:
	ok
	I0111 07:58:43.705381  326324 api_server.go:141] control plane version: v1.35.0
	I0111 07:58:43.705425  326324 api_server.go:131] duration metric: took 6.660796ms to wait for apiserver health ...
	I0111 07:58:43.705435  326324 system_pods.go:43] waiting for kube-system pods to appear ...
	I0111 07:58:43.706712  326324 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	W0111 07:58:41.239064  321160 pod_ready.go:104] pod "coredns-7d764666f9-9n4hp" is not "Ready", error: <nil>
	W0111 07:58:43.239929  321160 pod_ready.go:104] pod "coredns-7d764666f9-9n4hp" is not "Ready", error: <nil>
	I0111 07:58:43.707989  326324 system_pods.go:59] 8 kube-system pods found
	I0111 07:58:43.708014  326324 system_pods.go:61] "coredns-7d764666f9-tp2lb" [b1be6675-385f-4ec3-98c2-fd56542d3240] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0111 07:58:43.708022  326324 system_pods.go:61] "etcd-newest-cni-613901" [d53cbc3d-297b-427f-91e8-27852d932c59] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0111 07:58:43.708031  326324 system_pods.go:61] "kindnet-pvbpb" [71b2345b-ec04-4adc-8476-37a8fdd70756] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0111 07:58:43.708039  326324 system_pods.go:61] "kube-apiserver-newest-cni-613901" [3e516895-717b-41f3-894c-228f6ec7582e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0111 07:58:43.708045  326324 system_pods.go:61] "kube-controller-manager-newest-cni-613901" [d43f31c7-0834-4aa6-8679-49bd34282b2c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0111 07:58:43.708053  326324 system_pods.go:61] "kube-proxy-8hjlp" [496988d3-27d9-4ba8-b70b-f8a570aaa6aa] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0111 07:58:43.708057  326324 system_pods.go:61] "kube-scheduler-newest-cni-613901" [f2246e87-0b37-4475-aa04-ed6117f7df38] Running
	I0111 07:58:43.708065  326324 system_pods.go:61] "storage-provisioner" [f2494dd9-c249-4496-b604-95f98c228cf5] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0111 07:58:43.708071  326324 system_pods.go:74] duration metric: took 2.630648ms to wait for pod list to return data ...
	I0111 07:58:43.708080  326324 default_sa.go:34] waiting for default service account to be created ...
	I0111 07:58:43.708109  326324 addons.go:530] duration metric: took 481.190741ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0111 07:58:43.710237  326324 default_sa.go:45] found service account: "default"
	I0111 07:58:43.710256  326324 default_sa.go:55] duration metric: took 2.169661ms for default service account to be created ...
	I0111 07:58:43.710267  326324 kubeadm.go:587] duration metric: took 483.397022ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0111 07:58:43.710289  326324 node_conditions.go:102] verifying NodePressure condition ...
	I0111 07:58:43.712395  326324 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0111 07:58:43.712413  326324 node_conditions.go:123] node cpu capacity is 8
	I0111 07:58:43.712426  326324 node_conditions.go:105] duration metric: took 2.13244ms to run NodePressure ...
	I0111 07:58:43.712435  326324 start.go:242] waiting for startup goroutines ...
	I0111 07:58:43.990558  326324 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-613901" context rescaled to 1 replicas
	I0111 07:58:43.990598  326324 start.go:247] waiting for cluster config update ...
	I0111 07:58:43.990608  326324 start.go:256] writing updated cluster config ...
	I0111 07:58:43.990906  326324 ssh_runner.go:195] Run: rm -f paused
	I0111 07:58:44.039943  326324 start.go:625] kubectl: 1.35.0, cluster: 1.35.0 (minor skew: 0)
	I0111 07:58:44.042067  326324 out.go:179] * Done! kubectl is now configured to use "newest-cni-613901" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Jan 11 07:58:43 newest-cni-613901 crio[771]: time="2026-01-11T07:58:43.844417268Z" level=info msg="Ran pod sandbox 883ccc7bdf45f6b26ae20b8cbf2df6bf58c0203197a075c5d3aa83d94e4b3c24 with infra container: kube-system/kube-proxy-8hjlp/POD" id=c088db47-a381-454a-950e-779c4a3b5175 name=/runtime.v1.RuntimeService/RunPodSandbox
	Jan 11 07:58:43 newest-cni-613901 crio[771]: time="2026-01-11T07:58:43.845151636Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88" id=449e914e-3158-4ee8-a1dc-20af986698ad name=/runtime.v1.ImageService/ImageStatus
	Jan 11 07:58:43 newest-cni-613901 crio[771]: time="2026-01-11T07:58:43.845291384Z" level=info msg="Image docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88 not found" id=449e914e-3158-4ee8-a1dc-20af986698ad name=/runtime.v1.ImageService/ImageStatus
	Jan 11 07:58:43 newest-cni-613901 crio[771]: time="2026-01-11T07:58:43.845412752Z" level=info msg="Neither image nor artfiact docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88 found" id=449e914e-3158-4ee8-a1dc-20af986698ad name=/runtime.v1.ImageService/ImageStatus
	Jan 11 07:58:43 newest-cni-613901 crio[771]: time="2026-01-11T07:58:43.845494169Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0" id=623f5ee5-f3b0-48bb-a812-7756de661781 name=/runtime.v1.ImageService/ImageStatus
	Jan 11 07:58:43 newest-cni-613901 crio[771]: time="2026-01-11T07:58:43.846407846Z" level=info msg="Pulling image: docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88" id=45da0775-48cc-4fd0-99dc-a3315f587299 name=/runtime.v1.ImageService/PullImage
	Jan 11 07:58:43 newest-cni-613901 crio[771]: time="2026-01-11T07:58:43.846677301Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0" id=8fae210f-839f-4d14-8f7d-f81f3194b6da name=/runtime.v1.ImageService/ImageStatus
	Jan 11 07:58:43 newest-cni-613901 crio[771]: time="2026-01-11T07:58:43.846785315Z" level=info msg="Trying to access \"docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88\""
	Jan 11 07:58:43 newest-cni-613901 crio[771]: time="2026-01-11T07:58:43.850610584Z" level=info msg="Creating container: kube-system/kube-proxy-8hjlp/kube-proxy" id=ee1f428d-2479-4904-91ad-a61f9a1327e2 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 11 07:58:43 newest-cni-613901 crio[771]: time="2026-01-11T07:58:43.850751723Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 11 07:58:43 newest-cni-613901 crio[771]: time="2026-01-11T07:58:43.855651193Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 11 07:58:43 newest-cni-613901 crio[771]: time="2026-01-11T07:58:43.8562477Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 11 07:58:43 newest-cni-613901 crio[771]: time="2026-01-11T07:58:43.884671734Z" level=info msg="Created container e00494d7e344bb9d1d75144ae71c71622746627083ff3e89036b64ac420065c5: kube-system/kube-proxy-8hjlp/kube-proxy" id=ee1f428d-2479-4904-91ad-a61f9a1327e2 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 11 07:58:43 newest-cni-613901 crio[771]: time="2026-01-11T07:58:43.885360193Z" level=info msg="Starting container: e00494d7e344bb9d1d75144ae71c71622746627083ff3e89036b64ac420065c5" id=30b7de9b-916b-4bc2-b834-1d1df10fc2ae name=/runtime.v1.RuntimeService/StartContainer
	Jan 11 07:58:43 newest-cni-613901 crio[771]: time="2026-01-11T07:58:43.88787945Z" level=info msg="Started container" PID=1575 containerID=e00494d7e344bb9d1d75144ae71c71622746627083ff3e89036b64ac420065c5 description=kube-system/kube-proxy-8hjlp/kube-proxy id=30b7de9b-916b-4bc2-b834-1d1df10fc2ae name=/runtime.v1.RuntimeService/StartContainer sandboxID=883ccc7bdf45f6b26ae20b8cbf2df6bf58c0203197a075c5d3aa83d94e4b3c24
	Jan 11 07:58:45 newest-cni-613901 crio[771]: time="2026-01-11T07:58:45.082517085Z" level=info msg="Pulled image: docker.io/kindest/kindnetd@sha256:7c22558dc06a570d46ea6e8a73b23cdc754eb81f7c08d3441a3171ad359ffc27" id=45da0775-48cc-4fd0-99dc-a3315f587299 name=/runtime.v1.ImageService/PullImage
	Jan 11 07:58:45 newest-cni-613901 crio[771]: time="2026-01-11T07:58:45.083276202Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88" id=b045e504-8b20-4ff4-90b6-33b11ae55165 name=/runtime.v1.ImageService/ImageStatus
	Jan 11 07:58:45 newest-cni-613901 crio[771]: time="2026-01-11T07:58:45.085060885Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88" id=674238b4-855e-4061-b15d-a3d3033bd339 name=/runtime.v1.ImageService/ImageStatus
	Jan 11 07:58:45 newest-cni-613901 crio[771]: time="2026-01-11T07:58:45.088559301Z" level=info msg="Creating container: kube-system/kindnet-pvbpb/kindnet-cni" id=a57426dc-e65c-40b2-93b5-29c7c5c2ec97 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 11 07:58:45 newest-cni-613901 crio[771]: time="2026-01-11T07:58:45.088671698Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 11 07:58:45 newest-cni-613901 crio[771]: time="2026-01-11T07:58:45.093049897Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 11 07:58:45 newest-cni-613901 crio[771]: time="2026-01-11T07:58:45.093613756Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 11 07:58:45 newest-cni-613901 crio[771]: time="2026-01-11T07:58:45.12113363Z" level=info msg="Created container eaafd725fc0c3acae0075ba19f5f2fbc98c5ff71d7e2959e1a550ae67b360ae3: kube-system/kindnet-pvbpb/kindnet-cni" id=a57426dc-e65c-40b2-93b5-29c7c5c2ec97 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 11 07:58:45 newest-cni-613901 crio[771]: time="2026-01-11T07:58:45.121840453Z" level=info msg="Starting container: eaafd725fc0c3acae0075ba19f5f2fbc98c5ff71d7e2959e1a550ae67b360ae3" id=e0884b86-b6a0-4746-9b91-15aafbf0e7f5 name=/runtime.v1.RuntimeService/StartContainer
	Jan 11 07:58:45 newest-cni-613901 crio[771]: time="2026-01-11T07:58:45.123951857Z" level=info msg="Started container" PID=1838 containerID=eaafd725fc0c3acae0075ba19f5f2fbc98c5ff71d7e2959e1a550ae67b360ae3 description=kube-system/kindnet-pvbpb/kindnet-cni id=e0884b86-b6a0-4746-9b91-15aafbf0e7f5 name=/runtime.v1.RuntimeService/StartContainer sandboxID=2e29e307d99fcd49cde2963467cf3f96e237f4123dafa35f6f0f305a2229ab7f
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                CREATED                  STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	eaafd725fc0c3       docker.io/kindest/kindnetd@sha256:7c22558dc06a570d46ea6e8a73b23cdc754eb81f7c08d3441a3171ad359ffc27   Less than a second ago   Running             kindnet-cni               0                   2e29e307d99fc       kindnet-pvbpb                               kube-system
	e00494d7e344b       32652ff1bbe6b6df16a3dc621fcad4e6e185a2852bffc5847aab79e36c54bfb8                                     1 second ago             Running             kube-proxy                0                   883ccc7bdf45f       kube-proxy-8hjlp                            kube-system
	70ab3cf3bc6b2       550794e3b12ac21ec7fd940bdfb45f7c8dae3c52acd8c70e580b882de20c3dcc                                     11 seconds ago           Running             kube-scheduler            0                   aef2bacde694e       kube-scheduler-newest-cni-613901            kube-system
	85d3a0d0edb2c       2c9a4b058bd7e6ec479c38f9e8a1dac2f5ee5b0a3ebda6dfac968e8720229508                                     11 seconds ago           Running             kube-controller-manager   0                   00fbbbffcfdd2       kube-controller-manager-newest-cni-613901   kube-system
	20532c12f3fb9       0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2                                     11 seconds ago           Running             etcd                      0                   5198a7401df72       etcd-newest-cni-613901                      kube-system
	6bfe1758b6517       5c6acd67e9cd10eb60a246cd233db251ef62ea97e6572f897e873f0cb648f499                                     11 seconds ago           Running             kube-apiserver            0                   a54cd4e0a60cb       kube-apiserver-newest-cni-613901            kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-613901
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-613901
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4473aa4ffaa416872fe849e19c0ce3dabca02c04
	                    minikube.k8s.io/name=newest-cni-613901
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2026_01_11T07_58_39_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 11 Jan 2026 07:58:35 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-613901
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 11 Jan 2026 07:58:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 11 Jan 2026 07:58:38 +0000   Sun, 11 Jan 2026 07:58:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 11 Jan 2026 07:58:38 +0000   Sun, 11 Jan 2026 07:58:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 11 Jan 2026 07:58:38 +0000   Sun, 11 Jan 2026 07:58:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sun, 11 Jan 2026 07:58:38 +0000   Sun, 11 Jan 2026 07:58:34 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    newest-cni-613901
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 19a608e996fda0d6a8d0aefd69620b69
	  System UUID:                e49a8585-8f8b-41f4-a337-ff9547046d3e
	  Boot ID:                    bf5ef4fd-1e5e-4242-b522-48b308ed11f1
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.35.0
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-613901                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         7s
	  kube-system                 kindnet-pvbpb                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      2s
	  kube-system                 kube-apiserver-newest-cni-613901             250m (3%)     0 (0%)      0 (0%)           0 (0%)         7s
	  kube-system                 kube-controller-manager-newest-cni-613901    200m (2%)     0 (0%)      0 (0%)           0 (0%)         7s
	  kube-system                 kube-proxy-8hjlp                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  kube-system                 kube-scheduler-newest-cni-613901             100m (1%)     0 (0%)      0 (0%)           0 (0%)         9s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  3s    node-controller  Node newest-cni-613901 event: Registered Node newest-cni-613901 in Controller
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff d2 27 60 f0 e8 33 08 06
	[  +7.744805] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ae ab 39 92 a9 51 08 06
	[  +3.804668] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff f6 cc 68 e4 77 08 08 06
	[  +0.000399] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff d2 27 60 f0 e8 33 08 06
	[ +13.685163] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff ce 63 58 3f ae 06 08 06
	[  +0.000359] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ae ab 39 92 a9 51 08 06
	[ +15.294069] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 62 fd 79 5e 2e 8a 08 06
	[  +0.000335] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 92 a7 42 30 c5 33 08 06
	[  +0.001094] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 76 49 4e 78 31 d8 08 06
	[Jan11 07:56] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 1e 72 0a 69 e2 8c 08 06
	[  +0.129588] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 46 31 8d 74 d3 50 08 06
	[ +23.675508] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 72 b4 a7 bf 59 ba 08 06
	[  +0.000409] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 46 31 8d 74 d3 50 08 06
	
	
	==> etcd [20532c12f3fb9b348589536548c5d394e44f70fa14a242b21e81df930f56f0c4] <==
	{"level":"info","ts":"2026-01-11T07:58:33.642500Z","caller":"membership/cluster.go:424","msg":"added member","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","added-peer-id":"f23060b075c4c089","added-peer-peer-urls":["https://192.168.103.2:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2026-01-11T07:58:34.631546Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"f23060b075c4c089 is starting a new election at term 1"}
	{"level":"info","ts":"2026-01-11T07:58:34.631636Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"f23060b075c4c089 became pre-candidate at term 1"}
	{"level":"info","ts":"2026-01-11T07:58:34.631702Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"f23060b075c4c089 received MsgPreVoteResp from f23060b075c4c089 at term 1"}
	{"level":"info","ts":"2026-01-11T07:58:34.631717Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"f23060b075c4c089 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2026-01-11T07:58:34.631737Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"f23060b075c4c089 became candidate at term 2"}
	{"level":"info","ts":"2026-01-11T07:58:34.632651Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"f23060b075c4c089 received MsgVoteResp from f23060b075c4c089 at term 2"}
	{"level":"info","ts":"2026-01-11T07:58:34.632699Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"f23060b075c4c089 has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2026-01-11T07:58:34.632766Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"f23060b075c4c089 became leader at term 2"}
	{"level":"info","ts":"2026-01-11T07:58:34.632785Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: f23060b075c4c089 elected leader f23060b075c4c089 at term 2"}
	{"level":"info","ts":"2026-01-11T07:58:34.633647Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"f23060b075c4c089","local-member-attributes":"{Name:newest-cni-613901 ClientURLs:[https://192.168.103.2:2379]}","cluster-id":"3336683c081d149d","publish-timeout":"7s"}
	{"level":"info","ts":"2026-01-11T07:58:34.633658Z","caller":"etcdserver/server.go:2420","msg":"setting up initial cluster version using v3 API","cluster-version":"3.6"}
	{"level":"info","ts":"2026-01-11T07:58:34.633678Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-11T07:58:34.633655Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-11T07:58:34.634122Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2026-01-11T07:58:34.634196Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2026-01-11T07:58:34.634773Z","caller":"membership/cluster.go:682","msg":"set initial cluster version","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","cluster-version":"3.6"}
	{"level":"info","ts":"2026-01-11T07:58:34.634887Z","caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.6"}
	{"level":"info","ts":"2026-01-11T07:58:34.634926Z","caller":"etcdserver/server.go:2440","msg":"cluster version is updated","cluster-version":"3.6"}
	{"level":"info","ts":"2026-01-11T07:58:34.634973Z","caller":"version/monitor.go:116","msg":"cluster version differs from storage version.","cluster-version":"3.6.0","storage-version":"3.5.0"}
	{"level":"info","ts":"2026-01-11T07:58:34.635017Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-11T07:58:34.635105Z","caller":"schema/migration.go:65","msg":"updated storage version","new-storage-version":"3.6.0"}
	{"level":"info","ts":"2026-01-11T07:58:34.635253Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-11T07:58:34.638000Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.103.2:2379"}
	{"level":"info","ts":"2026-01-11T07:58:34.638145Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 07:58:45 up 41 min,  0 user,  load average: 3.69, 3.53, 2.56
	Linux newest-cni-613901 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [eaafd725fc0c3acae0075ba19f5f2fbc98c5ff71d7e2959e1a550ae67b360ae3] <==
	I0111 07:58:45.304135       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0111 07:58:45.304436       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I0111 07:58:45.304612       1 main.go:148] setting mtu 1500 for CNI 
	I0111 07:58:45.304639       1 main.go:178] kindnetd IP family: "ipv4"
	I0111 07:58:45.304664       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2026-01-11T07:58:45Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0111 07:58:45.509413       1 controller.go:377] "Starting controller" name="kube-network-policies"
	
	
	==> kube-apiserver [6bfe1758b65172bc1ad22bf226d597f3fe743a790dfa3a32e6944ab0f6a9e51c] <==
	I0111 07:58:35.587227       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I0111 07:58:35.587258       1 shared_informer.go:377] "Caches are synced"
	I0111 07:58:35.588286       1 controller.go:667] quota admission added evaluator for: namespaces
	I0111 07:58:35.589011       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0111 07:58:35.592772       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0111 07:58:35.593395       1 default_servicecidr_controller.go:231] Setting default ServiceCIDR condition Ready to True
	I0111 07:58:35.601526       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0111 07:58:35.782094       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I0111 07:58:36.491031       1 storage_scheduling.go:123] created PriorityClass system-node-critical with value 2000001000
	I0111 07:58:36.494750       1 storage_scheduling.go:123] created PriorityClass system-cluster-critical with value 2000000000
	I0111 07:58:36.494765       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I0111 07:58:36.941971       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0111 07:58:36.978594       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0111 07:58:37.096729       1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0111 07:58:37.102654       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
	I0111 07:58:37.103587       1 controller.go:667] quota admission added evaluator for: endpoints
	I0111 07:58:37.108330       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0111 07:58:37.510374       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I0111 07:58:38.240489       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I0111 07:58:38.249307       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0111 07:58:38.256914       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I0111 07:58:43.015391       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0111 07:58:43.019768       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0111 07:58:43.464604       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0111 07:58:43.512930       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [85d3a0d0edb2c629db29bffb0119c6cec559d1fc07ad3a184b2f04a686437666] <==
	I0111 07:58:42.318696       1 shared_informer.go:377] "Caches are synced"
	I0111 07:58:42.318716       1 shared_informer.go:377] "Caches are synced"
	I0111 07:58:42.318732       1 shared_informer.go:377] "Caches are synced"
	I0111 07:58:42.318755       1 shared_informer.go:377] "Caches are synced"
	I0111 07:58:42.318773       1 shared_informer.go:377] "Caches are synced"
	I0111 07:58:42.318868       1 shared_informer.go:377] "Caches are synced"
	I0111 07:58:42.318932       1 shared_informer.go:377] "Caches are synced"
	I0111 07:58:42.318944       1 shared_informer.go:377] "Caches are synced"
	I0111 07:58:42.318952       1 shared_informer.go:377] "Caches are synced"
	I0111 07:58:42.318924       1 shared_informer.go:377] "Caches are synced"
	I0111 07:58:42.318993       1 shared_informer.go:377] "Caches are synced"
	I0111 07:58:42.318932       1 shared_informer.go:377] "Caches are synced"
	I0111 07:58:42.319081       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I0111 07:58:42.318915       1 shared_informer.go:377] "Caches are synced"
	I0111 07:58:42.318862       1 shared_informer.go:377] "Caches are synced"
	I0111 07:58:42.318896       1 shared_informer.go:377] "Caches are synced"
	I0111 07:58:42.319727       1 shared_informer.go:377] "Caches are synced"
	I0111 07:58:42.319168       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="newest-cni-613901"
	I0111 07:58:42.319961       1 node_lifecycle_controller.go:1038] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I0111 07:58:42.322022       1 shared_informer.go:370] "Waiting for caches to sync"
	I0111 07:58:42.325086       1 range_allocator.go:433] "Set node PodCIDR" node="newest-cni-613901" podCIDRs=["10.42.0.0/24"]
	I0111 07:58:42.420031       1 shared_informer.go:377] "Caches are synced"
	I0111 07:58:42.420074       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I0111 07:58:42.420079       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I0111 07:58:42.422581       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [e00494d7e344bb9d1d75144ae71c71622746627083ff3e89036b64ac420065c5] <==
	I0111 07:58:43.922480       1 server_linux.go:53] "Using iptables proxy"
	I0111 07:58:43.988678       1 shared_informer.go:370] "Waiting for caches to sync"
	I0111 07:58:44.089546       1 shared_informer.go:377] "Caches are synced"
	I0111 07:58:44.089593       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E0111 07:58:44.089734       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0111 07:58:44.113352       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0111 07:58:44.113418       1 server_linux.go:136] "Using iptables Proxier"
	I0111 07:58:44.119213       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0111 07:58:44.119712       1 server.go:529] "Version info" version="v1.35.0"
	I0111 07:58:44.119743       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0111 07:58:44.121438       1 config.go:403] "Starting serviceCIDR config controller"
	I0111 07:58:44.121481       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0111 07:58:44.121503       1 config.go:309] "Starting node config controller"
	I0111 07:58:44.121518       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0111 07:58:44.121560       1 config.go:106] "Starting endpoint slice config controller"
	I0111 07:58:44.121569       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0111 07:58:44.121572       1 config.go:200] "Starting service config controller"
	I0111 07:58:44.121649       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0111 07:58:44.221720       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0111 07:58:44.221728       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0111 07:58:44.221765       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0111 07:58:44.222578       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [70ab3cf3bc6b291380dd3d42da0cce092b891e07074700411d21bc5fc1c9ee5e] <==
	E0111 07:58:35.539720       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E0111 07:58:35.539971       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E0111 07:58:35.540183       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E0111 07:58:35.540183       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E0111 07:58:35.540272       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E0111 07:58:35.540334       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E0111 07:58:35.540392       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E0111 07:58:35.540414       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E0111 07:58:35.540923       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E0111 07:58:35.540987       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E0111 07:58:35.541002       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E0111 07:58:35.541084       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E0111 07:58:35.541087       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E0111 07:58:35.541129       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E0111 07:58:35.541218       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E0111 07:58:36.391682       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E0111 07:58:36.433011       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1693" type="*v1.ConfigMap"
	E0111 07:58:36.507508       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E0111 07:58:36.569298       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E0111 07:58:36.611327       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E0111 07:58:36.637586       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E0111 07:58:36.644582       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E0111 07:58:36.781196       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E0111 07:58:36.786908       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	I0111 07:58:38.135661       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Jan 11 07:58:39 newest-cni-613901 kubelet[1293]: E0111 07:58:39.089711    1293 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-613901\" already exists" pod="kube-system/kube-apiserver-newest-cni-613901"
	Jan 11 07:58:39 newest-cni-613901 kubelet[1293]: E0111 07:58:39.089797    1293 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-613901" containerName="kube-apiserver"
	Jan 11 07:58:39 newest-cni-613901 kubelet[1293]: E0111 07:58:39.090069    1293 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-613901\" already exists" pod="kube-system/kube-scheduler-newest-cni-613901"
	Jan 11 07:58:39 newest-cni-613901 kubelet[1293]: E0111 07:58:39.090149    1293 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-613901" containerName="kube-scheduler"
	Jan 11 07:58:39 newest-cni-613901 kubelet[1293]: I0111 07:58:39.575175    1293 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-apiserver-newest-cni-613901" podStartSLOduration=1.575153478 podStartE2EDuration="1.575153478s" podCreationTimestamp="2026-01-11 07:58:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-11 07:58:39.567367532 +0000 UTC m=+1.585886952" watchObservedRunningTime="2026-01-11 07:58:39.575153478 +0000 UTC m=+1.593672900"
	Jan 11 07:58:39 newest-cni-613901 kubelet[1293]: I0111 07:58:39.583389    1293 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/etcd-newest-cni-613901" podStartSLOduration=1.583368579 podStartE2EDuration="1.583368579s" podCreationTimestamp="2026-01-11 07:58:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-11 07:58:39.57535687 +0000 UTC m=+1.593876287" watchObservedRunningTime="2026-01-11 07:58:39.583368579 +0000 UTC m=+1.601888013"
	Jan 11 07:58:39 newest-cni-613901 kubelet[1293]: I0111 07:58:39.583698    1293 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-controller-manager-newest-cni-613901" podStartSLOduration=1.5836793120000001 podStartE2EDuration="1.583679312s" podCreationTimestamp="2026-01-11 07:58:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-11 07:58:39.583585458 +0000 UTC m=+1.602104878" watchObservedRunningTime="2026-01-11 07:58:39.583679312 +0000 UTC m=+1.602198754"
	Jan 11 07:58:40 newest-cni-613901 kubelet[1293]: E0111 07:58:40.081796    1293 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-613901" containerName="kube-scheduler"
	Jan 11 07:58:40 newest-cni-613901 kubelet[1293]: E0111 07:58:40.081892    1293 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-613901" containerName="kube-apiserver"
	Jan 11 07:58:40 newest-cni-613901 kubelet[1293]: E0111 07:58:40.082059    1293 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-613901" containerName="etcd"
	Jan 11 07:58:40 newest-cni-613901 kubelet[1293]: I0111 07:58:40.094772    1293 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-scheduler-newest-cni-613901" podStartSLOduration=4.094751288 podStartE2EDuration="4.094751288s" podCreationTimestamp="2026-01-11 07:58:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-11 07:58:39.590941996 +0000 UTC m=+1.609461418" watchObservedRunningTime="2026-01-11 07:58:40.094751288 +0000 UTC m=+2.113270708"
	Jan 11 07:58:41 newest-cni-613901 kubelet[1293]: E0111 07:58:41.083150    1293 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-613901" containerName="kube-scheduler"
	Jan 11 07:58:41 newest-cni-613901 kubelet[1293]: E0111 07:58:41.083298    1293 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-613901" containerName="kube-apiserver"
	Jan 11 07:58:41 newest-cni-613901 kubelet[1293]: E0111 07:58:41.601039    1293 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-613901" containerName="etcd"
	Jan 11 07:58:42 newest-cni-613901 kubelet[1293]: E0111 07:58:42.279097    1293 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-newest-cni-613901" containerName="kube-controller-manager"
	Jan 11 07:58:42 newest-cni-613901 kubelet[1293]: I0111 07:58:42.381660    1293 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Jan 11 07:58:42 newest-cni-613901 kubelet[1293]: I0111 07:58:42.382337    1293 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Jan 11 07:58:43 newest-cni-613901 kubelet[1293]: I0111 07:58:43.590232    1293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/71b2345b-ec04-4adc-8476-37a8fdd70756-cni-cfg\") pod \"kindnet-pvbpb\" (UID: \"71b2345b-ec04-4adc-8476-37a8fdd70756\") " pod="kube-system/kindnet-pvbpb"
	Jan 11 07:58:43 newest-cni-613901 kubelet[1293]: I0111 07:58:43.590280    1293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4j4pq\" (UniqueName: \"kubernetes.io/projected/71b2345b-ec04-4adc-8476-37a8fdd70756-kube-api-access-4j4pq\") pod \"kindnet-pvbpb\" (UID: \"71b2345b-ec04-4adc-8476-37a8fdd70756\") " pod="kube-system/kindnet-pvbpb"
	Jan 11 07:58:43 newest-cni-613901 kubelet[1293]: I0111 07:58:43.590733    1293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/496988d3-27d9-4ba8-b70b-f8a570aaa6aa-kube-proxy\") pod \"kube-proxy-8hjlp\" (UID: \"496988d3-27d9-4ba8-b70b-f8a570aaa6aa\") " pod="kube-system/kube-proxy-8hjlp"
	Jan 11 07:58:43 newest-cni-613901 kubelet[1293]: I0111 07:58:43.590792    1293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/71b2345b-ec04-4adc-8476-37a8fdd70756-xtables-lock\") pod \"kindnet-pvbpb\" (UID: \"71b2345b-ec04-4adc-8476-37a8fdd70756\") " pod="kube-system/kindnet-pvbpb"
	Jan 11 07:58:43 newest-cni-613901 kubelet[1293]: I0111 07:58:43.590849    1293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/71b2345b-ec04-4adc-8476-37a8fdd70756-lib-modules\") pod \"kindnet-pvbpb\" (UID: \"71b2345b-ec04-4adc-8476-37a8fdd70756\") " pod="kube-system/kindnet-pvbpb"
	Jan 11 07:58:43 newest-cni-613901 kubelet[1293]: I0111 07:58:43.590877    1293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/496988d3-27d9-4ba8-b70b-f8a570aaa6aa-lib-modules\") pod \"kube-proxy-8hjlp\" (UID: \"496988d3-27d9-4ba8-b70b-f8a570aaa6aa\") " pod="kube-system/kube-proxy-8hjlp"
	Jan 11 07:58:43 newest-cni-613901 kubelet[1293]: I0111 07:58:43.591040    1293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/496988d3-27d9-4ba8-b70b-f8a570aaa6aa-xtables-lock\") pod \"kube-proxy-8hjlp\" (UID: \"496988d3-27d9-4ba8-b70b-f8a570aaa6aa\") " pod="kube-system/kube-proxy-8hjlp"
	Jan 11 07:58:43 newest-cni-613901 kubelet[1293]: I0111 07:58:43.591073    1293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nphm2\" (UniqueName: \"kubernetes.io/projected/496988d3-27d9-4ba8-b70b-f8a570aaa6aa-kube-api-access-nphm2\") pod \"kube-proxy-8hjlp\" (UID: \"496988d3-27d9-4ba8-b70b-f8a570aaa6aa\") " pod="kube-system/kube-proxy-8hjlp"
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-613901 -n newest-cni-613901
helpers_test.go:270: (dbg) Run:  kubectl --context newest-cni-613901 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: coredns-7d764666f9-tp2lb storage-provisioner
helpers_test.go:283: ======> post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context newest-cni-613901 describe pod coredns-7d764666f9-tp2lb storage-provisioner
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context newest-cni-613901 describe pod coredns-7d764666f9-tp2lb storage-provisioner: exit status 1 (56.532326ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-7d764666f9-tp2lb" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context newest-cni-613901 describe pod coredns-7d764666f9-tp2lb storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.16s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (5.85s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-585688 --alsologtostderr -v=1
E0111 07:59:02.604739    8238 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/kindnet-181803/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p default-k8s-diff-port-585688 --alsologtostderr -v=1: exit status 80 (2.188904927s)

                                                
                                                
-- stdout --
	* Pausing node default-k8s-diff-port-585688 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0111 07:59:02.403700  334502 out.go:360] Setting OutFile to fd 1 ...
	I0111 07:59:02.403979  334502 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 07:59:02.403989  334502 out.go:374] Setting ErrFile to fd 2...
	I0111 07:59:02.403993  334502 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 07:59:02.404160  334502 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22402-4689/.minikube/bin
	I0111 07:59:02.404414  334502 out.go:368] Setting JSON to false
	I0111 07:59:02.404431  334502 mustload.go:66] Loading cluster: default-k8s-diff-port-585688
	I0111 07:59:02.404748  334502 config.go:182] Loaded profile config "default-k8s-diff-port-585688": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 07:59:02.405112  334502 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-585688 --format={{.State.Status}}
	I0111 07:59:02.423857  334502 host.go:66] Checking if "default-k8s-diff-port-585688" exists ...
	I0111 07:59:02.424125  334502 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0111 07:59:02.479976  334502 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:60 OomKillDisable:false NGoroutines:67 SystemTime:2026-01-11 07:59:02.4692679 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map
[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0111 07:59:02.480583  334502 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22376/minikube-v1.37.0-1767438792-22376-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1767438792-22376/minikube-v1.37.0-1767438792-22376-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1767438792-22376-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) preload-source:auto profile:default-k8s-diff-port-585688 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) rosetta:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarni
ng:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I0111 07:59:02.482571  334502 out.go:179] * Pausing node default-k8s-diff-port-585688 ... 
	I0111 07:59:02.483865  334502 host.go:66] Checking if "default-k8s-diff-port-585688" exists ...
	I0111 07:59:02.484110  334502 ssh_runner.go:195] Run: systemctl --version
	I0111 07:59:02.484147  334502 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-585688
	I0111 07:59:02.502828  334502 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22402-4689/.minikube/machines/default-k8s-diff-port-585688/id_rsa Username:docker}
	I0111 07:59:02.603486  334502 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0111 07:59:02.615448  334502 pause.go:52] kubelet running: true
	I0111 07:59:02.615505  334502 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0111 07:59:02.771451  334502 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I0111 07:59:02.771549  334502 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0111 07:59:02.838727  334502 cri.go:96] found id: "dcee32095eab7e64a77f4e9473740b27b306a967ae964d32750fce0b53ac5fd8"
	I0111 07:59:02.838757  334502 cri.go:96] found id: "11e65b52493b4e94f1416a401649f24baddb7f3c4e0ac9e58b75f1a543d20479"
	I0111 07:59:02.838761  334502 cri.go:96] found id: "ba9522fd01ed5cb4a0478ac7b6c3b1bf98766ee9be0146de39fa388399f29772"
	I0111 07:59:02.838764  334502 cri.go:96] found id: "36c8108840cbbfcbfffcbdcb463b3ce8cb8f3e93f5fe9edafb5daf5a090e8112"
	I0111 07:59:02.838767  334502 cri.go:96] found id: "506df3158c5adff8c547b0ed655b3c34bf43771c0d58f5100ffa1fc8044ac26d"
	I0111 07:59:02.838771  334502 cri.go:96] found id: "8f22f3b6659f979a42e531e71e66b83f2e5a22b700a721070ae92fe483579939"
	I0111 07:59:02.838774  334502 cri.go:96] found id: "63e746d642de550ccfb3f5c5bee17e53f1ac0ac24283ebc2ad8f7153b15dec23"
	I0111 07:59:02.838777  334502 cri.go:96] found id: "4f9f1c19c75f7ac1ecbae6c98fc1987feb975fc126f5bcd1565546bc42504918"
	I0111 07:59:02.838779  334502 cri.go:96] found id: "a5a86114d8fe5ac425c7c94815a54aaa43fc1dc726e4e3b0be78ef6291f0b2df"
	I0111 07:59:02.838796  334502 cri.go:96] found id: "5edc90c892262436c0a2a920b53c5f04438fbac9879ebb9d336af2248627b478"
	I0111 07:59:02.838810  334502 cri.go:96] found id: "b582b197e069882b2c2b968a83d200c6e23f323a5a4f092959b23fde2b0b85d0"
	I0111 07:59:02.838815  334502 cri.go:96] found id: ""
	I0111 07:59:02.838857  334502 ssh_runner.go:195] Run: sudo runc list -f json
	I0111 07:59:02.850429  334502 retry.go:84] will retry after 300ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-11T07:59:02Z" level=error msg="open /run/runc: no such file or directory"
	I0111 07:59:03.118004  334502 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0111 07:59:03.130735  334502 pause.go:52] kubelet running: false
	I0111 07:59:03.130849  334502 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0111 07:59:03.265674  334502 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I0111 07:59:03.265769  334502 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0111 07:59:03.329932  334502 cri.go:96] found id: "dcee32095eab7e64a77f4e9473740b27b306a967ae964d32750fce0b53ac5fd8"
	I0111 07:59:03.329958  334502 cri.go:96] found id: "11e65b52493b4e94f1416a401649f24baddb7f3c4e0ac9e58b75f1a543d20479"
	I0111 07:59:03.329964  334502 cri.go:96] found id: "ba9522fd01ed5cb4a0478ac7b6c3b1bf98766ee9be0146de39fa388399f29772"
	I0111 07:59:03.329977  334502 cri.go:96] found id: "36c8108840cbbfcbfffcbdcb463b3ce8cb8f3e93f5fe9edafb5daf5a090e8112"
	I0111 07:59:03.329981  334502 cri.go:96] found id: "506df3158c5adff8c547b0ed655b3c34bf43771c0d58f5100ffa1fc8044ac26d"
	I0111 07:59:03.329986  334502 cri.go:96] found id: "8f22f3b6659f979a42e531e71e66b83f2e5a22b700a721070ae92fe483579939"
	I0111 07:59:03.329991  334502 cri.go:96] found id: "63e746d642de550ccfb3f5c5bee17e53f1ac0ac24283ebc2ad8f7153b15dec23"
	I0111 07:59:03.329996  334502 cri.go:96] found id: "4f9f1c19c75f7ac1ecbae6c98fc1987feb975fc126f5bcd1565546bc42504918"
	I0111 07:59:03.330002  334502 cri.go:96] found id: "a5a86114d8fe5ac425c7c94815a54aaa43fc1dc726e4e3b0be78ef6291f0b2df"
	I0111 07:59:03.330009  334502 cri.go:96] found id: "5edc90c892262436c0a2a920b53c5f04438fbac9879ebb9d336af2248627b478"
	I0111 07:59:03.330014  334502 cri.go:96] found id: "b582b197e069882b2c2b968a83d200c6e23f323a5a4f092959b23fde2b0b85d0"
	I0111 07:59:03.330019  334502 cri.go:96] found id: ""
	I0111 07:59:03.330059  334502 ssh_runner.go:195] Run: sudo runc list -f json
	I0111 07:59:03.554369  334502 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0111 07:59:03.567367  334502 pause.go:52] kubelet running: false
	I0111 07:59:03.567422  334502 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0111 07:59:03.703867  334502 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I0111 07:59:03.703952  334502 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0111 07:59:03.769158  334502 cri.go:96] found id: "dcee32095eab7e64a77f4e9473740b27b306a967ae964d32750fce0b53ac5fd8"
	I0111 07:59:03.769180  334502 cri.go:96] found id: "11e65b52493b4e94f1416a401649f24baddb7f3c4e0ac9e58b75f1a543d20479"
	I0111 07:59:03.769184  334502 cri.go:96] found id: "ba9522fd01ed5cb4a0478ac7b6c3b1bf98766ee9be0146de39fa388399f29772"
	I0111 07:59:03.769187  334502 cri.go:96] found id: "36c8108840cbbfcbfffcbdcb463b3ce8cb8f3e93f5fe9edafb5daf5a090e8112"
	I0111 07:59:03.769189  334502 cri.go:96] found id: "506df3158c5adff8c547b0ed655b3c34bf43771c0d58f5100ffa1fc8044ac26d"
	I0111 07:59:03.769192  334502 cri.go:96] found id: "8f22f3b6659f979a42e531e71e66b83f2e5a22b700a721070ae92fe483579939"
	I0111 07:59:03.769195  334502 cri.go:96] found id: "63e746d642de550ccfb3f5c5bee17e53f1ac0ac24283ebc2ad8f7153b15dec23"
	I0111 07:59:03.769198  334502 cri.go:96] found id: "4f9f1c19c75f7ac1ecbae6c98fc1987feb975fc126f5bcd1565546bc42504918"
	I0111 07:59:03.769200  334502 cri.go:96] found id: "a5a86114d8fe5ac425c7c94815a54aaa43fc1dc726e4e3b0be78ef6291f0b2df"
	I0111 07:59:03.769207  334502 cri.go:96] found id: "5edc90c892262436c0a2a920b53c5f04438fbac9879ebb9d336af2248627b478"
	I0111 07:59:03.769210  334502 cri.go:96] found id: "b582b197e069882b2c2b968a83d200c6e23f323a5a4f092959b23fde2b0b85d0"
	I0111 07:59:03.769212  334502 cri.go:96] found id: ""
	I0111 07:59:03.769261  334502 ssh_runner.go:195] Run: sudo runc list -f json
	I0111 07:59:04.245920  334502 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0111 07:59:04.260419  334502 pause.go:52] kubelet running: false
	I0111 07:59:04.260470  334502 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0111 07:59:04.430002  334502 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I0111 07:59:04.430078  334502 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0111 07:59:04.513964  334502 cri.go:96] found id: "dcee32095eab7e64a77f4e9473740b27b306a967ae964d32750fce0b53ac5fd8"
	I0111 07:59:04.514000  334502 cri.go:96] found id: "11e65b52493b4e94f1416a401649f24baddb7f3c4e0ac9e58b75f1a543d20479"
	I0111 07:59:04.514006  334502 cri.go:96] found id: "ba9522fd01ed5cb4a0478ac7b6c3b1bf98766ee9be0146de39fa388399f29772"
	I0111 07:59:04.514011  334502 cri.go:96] found id: "36c8108840cbbfcbfffcbdcb463b3ce8cb8f3e93f5fe9edafb5daf5a090e8112"
	I0111 07:59:04.514016  334502 cri.go:96] found id: "506df3158c5adff8c547b0ed655b3c34bf43771c0d58f5100ffa1fc8044ac26d"
	I0111 07:59:04.514021  334502 cri.go:96] found id: "8f22f3b6659f979a42e531e71e66b83f2e5a22b700a721070ae92fe483579939"
	I0111 07:59:04.514025  334502 cri.go:96] found id: "63e746d642de550ccfb3f5c5bee17e53f1ac0ac24283ebc2ad8f7153b15dec23"
	I0111 07:59:04.514031  334502 cri.go:96] found id: "4f9f1c19c75f7ac1ecbae6c98fc1987feb975fc126f5bcd1565546bc42504918"
	I0111 07:59:04.514036  334502 cri.go:96] found id: "a5a86114d8fe5ac425c7c94815a54aaa43fc1dc726e4e3b0be78ef6291f0b2df"
	I0111 07:59:04.514052  334502 cri.go:96] found id: "5edc90c892262436c0a2a920b53c5f04438fbac9879ebb9d336af2248627b478"
	I0111 07:59:04.514057  334502 cri.go:96] found id: "b582b197e069882b2c2b968a83d200c6e23f323a5a4f092959b23fde2b0b85d0"
	I0111 07:59:04.514062  334502 cri.go:96] found id: ""
	I0111 07:59:04.514111  334502 ssh_runner.go:195] Run: sudo runc list -f json
	I0111 07:59:04.530700  334502 out.go:203] 
	W0111 07:59:04.532024  334502 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-11T07:59:04Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-11T07:59:04Z" level=error msg="open /run/runc: no such file or directory"
	
	W0111 07:59:04.532049  334502 out.go:285] * 
	* 
	W0111 07:59:04.534320  334502 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0111 07:59:04.535768  334502 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p default-k8s-diff-port-585688 --alsologtostderr -v=1 failed: exit status 80
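The stderr trace above shows the pause path failing: it stops the kubelet, lists kube-system/kubernetes-dashboard/istio-operator containers with crictl, then tries to confirm the runtime's view with `sudo runc list -f json`, retrying after a short backoff; every attempt fails with "open /run/runc: no such file or directory", so pause exits with GUEST_PAUSE. Below is a minimal sketch of that loop, under the assumption that it runs directly on the node (minikube actually drives the same commands over SSH via ssh_runner); the helper name runOnNode is illustrative and not minikube's API, and the commands are copied from the trace.

// pause_sketch.go: illustrative reconstruction of the failing pause sequence.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// runOnNode runs a shell command with sudo and returns its combined output.
func runOnNode(cmd string) (string, error) {
	out, err := exec.Command("sudo", "sh", "-c", cmd).CombinedOutput()
	return string(out), err
}

func main() {
	// Stop the kubelet so nothing restarts containers while pausing.
	if _, err := runOnNode("systemctl disable --now kubelet"); err != nil {
		fmt.Println("disable kubelet:", err)
	}

	// List kube-system containers via crictl, as cri.go:61 does in the trace.
	ids, _ := runOnNode(`crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system`)
	fmt.Println("container ids:", ids)

	// Cross-check with runc, retrying briefly on failure. In this test the
	// call keeps failing because /run/runc does not exist on the node.
	var lastErr error
	for attempt := 0; attempt < 4; attempt++ {
		if _, lastErr = runOnNode("runc list -f json"); lastErr == nil {
			fmt.Println("runc list succeeded, containers can be paused")
			return
		}
		time.Sleep(300 * time.Millisecond) // matches the retry.go backoff logged above
	}
	fmt.Println("giving up:", lastErr) // pause surfaces this as GUEST_PAUSE
}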
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect default-k8s-diff-port-585688
helpers_test.go:244: (dbg) docker inspect default-k8s-diff-port-585688:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "4e491c631d37cd114a87ec85803ddb84ccce4a183e656efbfac985531201e66b",
	        "Created": "2026-01-11T07:57:00.448699199Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 321366,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2026-01-11T07:58:03.98416149Z",
	            "FinishedAt": "2026-01-11T07:58:03.096284685Z"
	        },
	        "Image": "sha256:9d81ad6f098dcf669510a21e6f98da4d3301079c139c22f3d7bcee050830f45e",
	        "ResolvConfPath": "/var/lib/docker/containers/4e491c631d37cd114a87ec85803ddb84ccce4a183e656efbfac985531201e66b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4e491c631d37cd114a87ec85803ddb84ccce4a183e656efbfac985531201e66b/hostname",
	        "HostsPath": "/var/lib/docker/containers/4e491c631d37cd114a87ec85803ddb84ccce4a183e656efbfac985531201e66b/hosts",
	        "LogPath": "/var/lib/docker/containers/4e491c631d37cd114a87ec85803ddb84ccce4a183e656efbfac985531201e66b/4e491c631d37cd114a87ec85803ddb84ccce4a183e656efbfac985531201e66b-json.log",
	        "Name": "/default-k8s-diff-port-585688",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-585688:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-585688",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "4e491c631d37cd114a87ec85803ddb84ccce4a183e656efbfac985531201e66b",
	                "LowerDir": "/var/lib/docker/overlay2/31bb134ce8349339fa6c08d8e9235d969f6e89a7b0ac8aa0139d1b384ddee071-init/diff:/var/lib/docker/overlay2/fbed3e2386e680328722e9ed68250a4aae1dd3b50a64848ee2ebb685fb1cc002/diff",
	                "MergedDir": "/var/lib/docker/overlay2/31bb134ce8349339fa6c08d8e9235d969f6e89a7b0ac8aa0139d1b384ddee071/merged",
	                "UpperDir": "/var/lib/docker/overlay2/31bb134ce8349339fa6c08d8e9235d969f6e89a7b0ac8aa0139d1b384ddee071/diff",
	                "WorkDir": "/var/lib/docker/overlay2/31bb134ce8349339fa6c08d8e9235d969f6e89a7b0ac8aa0139d1b384ddee071/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-585688",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-585688/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-585688",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-585688",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-585688",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "dd2c0bafd76c52aeee70903f034bdb16d3924df2fcd183472f2090a195870109",
	            "SandboxKey": "/var/run/docker/netns/dd2c0bafd76c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33123"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33124"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33127"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33125"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33126"
	                    }
	                ]
	            },
	            "Networks": {
	                "default-k8s-diff-port-585688": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "a19185e10f04402d7102cb5e5f5b024dff862ce9b51ccbccdcfe21cef3be911e",
	                    "EndpointID": "d84fd8535d958614b81e0353741aa7b015805b499f39d2d066206e810946a5c7",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "4e:00:df:95:98:4c",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-585688",
	                        "4e491c631d37"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
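For reference, the 22/tcp mapping shown in the inspect output above (127.0.0.1:33123) is the host port that sshutil dialed at 07:59:02.502828 in the pause trace. A small sketch of reading that mapping back with the same Go template cli_runner executes in the trace, assuming a local Docker daemon and this profile's container name:

// port_sketch.go: prints the host port forwarding to the node's SSH port.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same template the trace runs via `docker container inspect -f ...`.
	tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl,
		"default-k8s-diff-port-585688").Output()
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Printf("ssh host port: %s", out) // expected 33123 per the log above
}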
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-585688 -n default-k8s-diff-port-585688
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-585688 -n default-k8s-diff-port-585688: exit status 2 (357.971411ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-585688 logs -n 25
E0111 07:59:05.165719    8238 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/kindnet-181803/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-585688 logs -n 25: (1.091700494s)
helpers_test.go:261: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────
────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │              PROFILE              │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────
────────────┤
	│ image   │ old-k8s-version-086314 image list --format=json                                                                                                                                                                                               │ old-k8s-version-086314            │ jenkins │ v1.37.0 │ 11 Jan 26 07:58 UTC │ 11 Jan 26 07:58 UTC │
	│ pause   │ -p old-k8s-version-086314 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-086314            │ jenkins │ v1.37.0 │ 11 Jan 26 07:58 UTC │                     │
	│ delete  │ -p old-k8s-version-086314                                                                                                                                                                                                                     │ old-k8s-version-086314            │ jenkins │ v1.37.0 │ 11 Jan 26 07:58 UTC │ 11 Jan 26 07:58 UTC │
	│ image   │ no-preload-875594 image list --format=json                                                                                                                                                                                                    │ no-preload-875594                 │ jenkins │ v1.37.0 │ 11 Jan 26 07:58 UTC │ 11 Jan 26 07:58 UTC │
	│ pause   │ -p no-preload-875594 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-875594                 │ jenkins │ v1.37.0 │ 11 Jan 26 07:58 UTC │                     │
	│ image   │ embed-certs-285966 image list --format=json                                                                                                                                                                                                   │ embed-certs-285966                │ jenkins │ v1.37.0 │ 11 Jan 26 07:58 UTC │ 11 Jan 26 07:58 UTC │
	│ pause   │ -p embed-certs-285966 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-285966                │ jenkins │ v1.37.0 │ 11 Jan 26 07:58 UTC │                     │
	│ delete  │ -p old-k8s-version-086314                                                                                                                                                                                                                     │ old-k8s-version-086314            │ jenkins │ v1.37.0 │ 11 Jan 26 07:58 UTC │ 11 Jan 26 07:58 UTC │
	│ start   │ -p newest-cni-613901 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ newest-cni-613901                 │ jenkins │ v1.37.0 │ 11 Jan 26 07:58 UTC │ 11 Jan 26 07:58 UTC │
	│ delete  │ -p no-preload-875594                                                                                                                                                                                                                          │ no-preload-875594                 │ jenkins │ v1.37.0 │ 11 Jan 26 07:58 UTC │ 11 Jan 26 07:58 UTC │
	│ delete  │ -p embed-certs-285966                                                                                                                                                                                                                         │ embed-certs-285966                │ jenkins │ v1.37.0 │ 11 Jan 26 07:58 UTC │ 11 Jan 26 07:58 UTC │
	│ delete  │ -p no-preload-875594                                                                                                                                                                                                                          │ no-preload-875594                 │ jenkins │ v1.37.0 │ 11 Jan 26 07:58 UTC │ 11 Jan 26 07:58 UTC │
	│ start   │ -p test-preload-dl-gcs-476138 --download-only --kubernetes-version v1.34.0-rc.1 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=crio                                                                        │ test-preload-dl-gcs-476138        │ jenkins │ v1.37.0 │ 11 Jan 26 07:58 UTC │                     │
	│ delete  │ -p embed-certs-285966                                                                                                                                                                                                                         │ embed-certs-285966                │ jenkins │ v1.37.0 │ 11 Jan 26 07:58 UTC │ 11 Jan 26 07:58 UTC │
	│ delete  │ -p test-preload-dl-gcs-476138                                                                                                                                                                                                                 │ test-preload-dl-gcs-476138        │ jenkins │ v1.37.0 │ 11 Jan 26 07:58 UTC │ 11 Jan 26 07:58 UTC │
	│ start   │ -p test-preload-dl-github-044639 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=github --alsologtostderr --v=1 --driver=docker  --container-runtime=crio                                                                  │ test-preload-dl-github-044639     │ jenkins │ v1.37.0 │ 11 Jan 26 07:58 UTC │                     │
	│ delete  │ -p test-preload-dl-github-044639                                                                                                                                                                                                              │ test-preload-dl-github-044639     │ jenkins │ v1.37.0 │ 11 Jan 26 07:58 UTC │ 11 Jan 26 07:58 UTC │
	│ start   │ -p test-preload-dl-gcs-cached-787958 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=crio                                                                 │ test-preload-dl-gcs-cached-787958 │ jenkins │ v1.37.0 │ 11 Jan 26 07:58 UTC │                     │
	│ delete  │ -p test-preload-dl-gcs-cached-787958                                                                                                                                                                                                          │ test-preload-dl-gcs-cached-787958 │ jenkins │ v1.37.0 │ 11 Jan 26 07:58 UTC │ 11 Jan 26 07:58 UTC │
	│ addons  │ enable metrics-server -p newest-cni-613901 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-613901                 │ jenkins │ v1.37.0 │ 11 Jan 26 07:58 UTC │                     │
	│ stop    │ -p newest-cni-613901 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-613901                 │ jenkins │ v1.37.0 │ 11 Jan 26 07:58 UTC │ 11 Jan 26 07:59 UTC │
	│ image   │ default-k8s-diff-port-585688 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-585688      │ jenkins │ v1.37.0 │ 11 Jan 26 07:59 UTC │ 11 Jan 26 07:59 UTC │
	│ pause   │ -p default-k8s-diff-port-585688 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-585688      │ jenkins │ v1.37.0 │ 11 Jan 26 07:59 UTC │                     │
	│ addons  │ enable dashboard -p newest-cni-613901 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-613901                 │ jenkins │ v1.37.0 │ 11 Jan 26 07:59 UTC │ 11 Jan 26 07:59 UTC │
	│ start   │ -p newest-cni-613901 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ newest-cni-613901                 │ jenkins │ v1.37.0 │ 11 Jan 26 07:59 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────
────────────┘
	
	
	==> Last Start <==
	Log file created at: 2026/01/11 07:59:04
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0111 07:59:04.424321  334964 out.go:360] Setting OutFile to fd 1 ...
	I0111 07:59:04.424599  334964 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 07:59:04.424609  334964 out.go:374] Setting ErrFile to fd 2...
	I0111 07:59:04.424616  334964 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 07:59:04.424850  334964 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22402-4689/.minikube/bin
	I0111 07:59:04.425313  334964 out.go:368] Setting JSON to false
	I0111 07:59:04.426460  334964 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2492,"bootTime":1768115852,"procs":407,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0111 07:59:04.426513  334964 start.go:143] virtualization: kvm guest
	I0111 07:59:04.428404  334964 out.go:179] * [newest-cni-613901] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0111 07:59:04.429688  334964 out.go:179]   - MINIKUBE_LOCATION=22402
	I0111 07:59:04.429693  334964 notify.go:221] Checking for updates...
	I0111 07:59:04.432070  334964 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0111 07:59:04.433340  334964 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22402-4689/kubeconfig
	I0111 07:59:04.434705  334964 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22402-4689/.minikube
	I0111 07:59:04.435864  334964 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0111 07:59:04.437099  334964 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0111 07:59:04.438644  334964 config.go:182] Loaded profile config "newest-cni-613901": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 07:59:04.439184  334964 driver.go:422] Setting default libvirt URI to qemu:///system
	I0111 07:59:04.469255  334964 docker.go:124] docker version: linux-29.1.4:Docker Engine - Community
	I0111 07:59:04.469400  334964 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0111 07:59:04.532551  334964 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:54 SystemTime:2026-01-11 07:59:04.522566296 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0111 07:59:04.532660  334964 docker.go:319] overlay module found
	I0111 07:59:04.534785  334964 out.go:179] * Using the docker driver based on existing profile
	I0111 07:59:04.536472  334964 start.go:309] selected driver: docker
	I0111 07:59:04.536485  334964 start.go:928] validating driver "docker" against &{Name:newest-cni-613901 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-613901 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker Mo
untIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0111 07:59:04.536564  334964 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0111 07:59:04.537160  334964 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0111 07:59:04.596657  334964 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:54 SystemTime:2026-01-11 07:59:04.586303209 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0111 07:59:04.597010  334964 start_flags.go:1038] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0111 07:59:04.597044  334964 cni.go:84] Creating CNI manager for ""
	I0111 07:59:04.597131  334964 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0111 07:59:04.597199  334964 start.go:353] cluster config:
	{Name:newest-cni-613901 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-613901 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docke
r BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0111 07:59:04.599224  334964 out.go:179] * Starting "newest-cni-613901" primary control-plane node in "newest-cni-613901" cluster
	I0111 07:59:04.601496  334964 cache.go:134] Beginning downloading kic base image for docker with crio
	I0111 07:59:04.604261  334964 out.go:179] * Pulling base image v0.0.48-1768032998-22402 ...
	I0111 07:59:04.605369  334964 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0111 07:59:04.605402  334964 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22402-4689/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4
	I0111 07:59:04.605417  334964 cache.go:65] Caching tarball of preloaded images
	I0111 07:59:04.605467  334964 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 in local docker daemon
	I0111 07:59:04.605487  334964 preload.go:251] Found /home/jenkins/minikube-integration/22402-4689/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0111 07:59:04.605494  334964 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I0111 07:59:04.605578  334964 profile.go:143] Saving config to /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/newest-cni-613901/config.json ...
	I0111 07:59:04.627520  334964 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 in local docker daemon, skipping pull
	I0111 07:59:04.627538  334964 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 exists in daemon, skipping load
	I0111 07:59:04.627553  334964 cache.go:243] Successfully downloaded all kic artifacts
	I0111 07:59:04.627579  334964 start.go:360] acquireMachinesLock for newest-cni-613901: {Name:mk432dc96cb4574bc050bdfd86d1212df9b7472d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0111 07:59:04.627639  334964 start.go:364] duration metric: took 35.746µs to acquireMachinesLock for "newest-cni-613901"
	I0111 07:59:04.627656  334964 start.go:96] Skipping create...Using existing machine configuration
	I0111 07:59:04.627663  334964 fix.go:54] fixHost starting: 
	I0111 07:59:04.627886  334964 cli_runner.go:164] Run: docker container inspect newest-cni-613901 --format={{.State.Status}}
	I0111 07:59:04.649205  334964 fix.go:112] recreateIfNeeded on newest-cni-613901: state=Stopped err=<nil>
	W0111 07:59:04.649242  334964 fix.go:138] unexpected machine state, will restart: <nil>
	
	
	==> CRI-O <==
	Jan 11 07:58:31 default-k8s-diff-port-585688 crio[580]: time="2026-01-11T07:58:31.324774852Z" level=info msg="Started container" PID=1776 containerID=242369a0a892123eb678ee784c6587e4592b7c6c2020bcc818a60094700dd714 description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-4pb6z/dashboard-metrics-scraper id=ec4e5bcb-daf2-407d-ba8b-f5a5599263d5 name=/runtime.v1.RuntimeService/StartContainer sandboxID=dd3ba93cc5adaa2a466d7f40d80c17bd50b3418e7bb934b2d627a6f517eb8e30
	Jan 11 07:58:31 default-k8s-diff-port-585688 crio[580]: time="2026-01-11T07:58:31.370046434Z" level=info msg="Removing container: 0c34f50ecf43c7aba09a03efe28f574baf9892bb7e3428221642c03ca41a9aff" id=8e8679e3-c4dd-42d4-88e9-9ca2f16e73e1 name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 11 07:58:31 default-k8s-diff-port-585688 crio[580]: time="2026-01-11T07:58:31.3799179Z" level=info msg="Removed container 0c34f50ecf43c7aba09a03efe28f574baf9892bb7e3428221642c03ca41a9aff: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-4pb6z/dashboard-metrics-scraper" id=8e8679e3-c4dd-42d4-88e9-9ca2f16e73e1 name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 11 07:58:44 default-k8s-diff-port-585688 crio[580]: time="2026-01-11T07:58:44.404917252Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=ce1cbd0d-6ad7-48a2-a527-07245bdaadb4 name=/runtime.v1.ImageService/ImageStatus
	Jan 11 07:58:44 default-k8s-diff-port-585688 crio[580]: time="2026-01-11T07:58:44.406034783Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=1b572cb3-bab3-410f-b0b8-5431605877fd name=/runtime.v1.ImageService/ImageStatus
	Jan 11 07:58:44 default-k8s-diff-port-585688 crio[580]: time="2026-01-11T07:58:44.40746759Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=138e2fc2-7773-46a0-b73e-79cf365c5ad1 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 11 07:58:44 default-k8s-diff-port-585688 crio[580]: time="2026-01-11T07:58:44.407630705Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 11 07:58:44 default-k8s-diff-port-585688 crio[580]: time="2026-01-11T07:58:44.413666708Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 11 07:58:44 default-k8s-diff-port-585688 crio[580]: time="2026-01-11T07:58:44.413956302Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/c9fa5bfff5ef4edff5de90af1648e7d2578238c66f6603748357b43c01069681/merged/etc/passwd: no such file or directory"
	Jan 11 07:58:44 default-k8s-diff-port-585688 crio[580]: time="2026-01-11T07:58:44.413997548Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/c9fa5bfff5ef4edff5de90af1648e7d2578238c66f6603748357b43c01069681/merged/etc/group: no such file or directory"
	Jan 11 07:58:44 default-k8s-diff-port-585688 crio[580]: time="2026-01-11T07:58:44.414318211Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 11 07:58:44 default-k8s-diff-port-585688 crio[580]: time="2026-01-11T07:58:44.459539414Z" level=info msg="Created container dcee32095eab7e64a77f4e9473740b27b306a967ae964d32750fce0b53ac5fd8: kube-system/storage-provisioner/storage-provisioner" id=138e2fc2-7773-46a0-b73e-79cf365c5ad1 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 11 07:58:44 default-k8s-diff-port-585688 crio[580]: time="2026-01-11T07:58:44.460253924Z" level=info msg="Starting container: dcee32095eab7e64a77f4e9473740b27b306a967ae964d32750fce0b53ac5fd8" id=de9e1c1c-6ed7-46a0-aa19-819d9a2570fe name=/runtime.v1.RuntimeService/StartContainer
	Jan 11 07:58:44 default-k8s-diff-port-585688 crio[580]: time="2026-01-11T07:58:44.462425015Z" level=info msg="Started container" PID=1790 containerID=dcee32095eab7e64a77f4e9473740b27b306a967ae964d32750fce0b53ac5fd8 description=kube-system/storage-provisioner/storage-provisioner id=de9e1c1c-6ed7-46a0-aa19-819d9a2570fe name=/runtime.v1.RuntimeService/StartContainer sandboxID=c8a1f86285d7f65a34470415a530a541609104a8b79802c3b5634251e944f004
	Jan 11 07:58:57 default-k8s-diff-port-585688 crio[580]: time="2026-01-11T07:58:57.282392151Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=97532016-fbbb-4d8b-b5a3-a0831be05c67 name=/runtime.v1.ImageService/ImageStatus
	Jan 11 07:58:57 default-k8s-diff-port-585688 crio[580]: time="2026-01-11T07:58:57.283431377Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=3112da97-3fb3-4bff-9bc5-a2d9649ec202 name=/runtime.v1.ImageService/ImageStatus
	Jan 11 07:58:57 default-k8s-diff-port-585688 crio[580]: time="2026-01-11T07:58:57.284456765Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-4pb6z/dashboard-metrics-scraper" id=f191c917-c1c8-4f8d-a138-1b1485e13630 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 11 07:58:57 default-k8s-diff-port-585688 crio[580]: time="2026-01-11T07:58:57.284569816Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 11 07:58:57 default-k8s-diff-port-585688 crio[580]: time="2026-01-11T07:58:57.290195273Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 11 07:58:57 default-k8s-diff-port-585688 crio[580]: time="2026-01-11T07:58:57.290691042Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 11 07:58:57 default-k8s-diff-port-585688 crio[580]: time="2026-01-11T07:58:57.318871012Z" level=info msg="Created container 5edc90c892262436c0a2a920b53c5f04438fbac9879ebb9d336af2248627b478: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-4pb6z/dashboard-metrics-scraper" id=f191c917-c1c8-4f8d-a138-1b1485e13630 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 11 07:58:57 default-k8s-diff-port-585688 crio[580]: time="2026-01-11T07:58:57.319470408Z" level=info msg="Starting container: 5edc90c892262436c0a2a920b53c5f04438fbac9879ebb9d336af2248627b478" id=7b0d1090-1ebf-4cff-b788-88318b142603 name=/runtime.v1.RuntimeService/StartContainer
	Jan 11 07:58:57 default-k8s-diff-port-585688 crio[580]: time="2026-01-11T07:58:57.321501427Z" level=info msg="Started container" PID=1826 containerID=5edc90c892262436c0a2a920b53c5f04438fbac9879ebb9d336af2248627b478 description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-4pb6z/dashboard-metrics-scraper id=7b0d1090-1ebf-4cff-b788-88318b142603 name=/runtime.v1.RuntimeService/StartContainer sandboxID=dd3ba93cc5adaa2a466d7f40d80c17bd50b3418e7bb934b2d627a6f517eb8e30
	Jan 11 07:58:57 default-k8s-diff-port-585688 crio[580]: time="2026-01-11T07:58:57.441844962Z" level=info msg="Removing container: 242369a0a892123eb678ee784c6587e4592b7c6c2020bcc818a60094700dd714" id=0b289c0b-0975-4917-8417-83af42d200e4 name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 11 07:58:57 default-k8s-diff-port-585688 crio[580]: time="2026-01-11T07:58:57.451483539Z" level=info msg="Removed container 242369a0a892123eb678ee784c6587e4592b7c6c2020bcc818a60094700dd714: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-4pb6z/dashboard-metrics-scraper" id=0b289c0b-0975-4917-8417-83af42d200e4 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	5edc90c892262       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           8 seconds ago       Exited              dashboard-metrics-scraper   3                   dd3ba93cc5ada       dashboard-metrics-scraper-867fb5f87b-4pb6z             kubernetes-dashboard
	dcee32095eab7       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           21 seconds ago      Running             storage-provisioner         1                   c8a1f86285d7f       storage-provisioner                                    kube-system
	b582b197e0698       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   41 seconds ago      Running             kubernetes-dashboard        0                   4903293d1765e       kubernetes-dashboard-b84665fb8-z4znd                   kubernetes-dashboard
	11e65b52493b4       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                           51 seconds ago      Running             coredns                     0                   4fe7a241c7a13       coredns-7d764666f9-9n4hp                               kube-system
	6f273e5d5e1ad       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           51 seconds ago      Running             busybox                     1                   27811c43a42b5       busybox                                                default
	ba9522fd01ed5       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           51 seconds ago      Exited              storage-provisioner         0                   c8a1f86285d7f       storage-provisioner                                    kube-system
	36c8108840cbb       4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251                                           51 seconds ago      Running             kindnet-cni                 0                   4b007a88380be       kindnet-lz5hh                                          kube-system
	506df3158c5ad       32652ff1bbe6b6df16a3dc621fcad4e6e185a2852bffc5847aab79e36c54bfb8                                           51 seconds ago      Running             kube-proxy                  0                   c680fdc6a2f68       kube-proxy-5z6rc                                       kube-system
	8f22f3b6659f9       5c6acd67e9cd10eb60a246cd233db251ef62ea97e6572f897e873f0cb648f499                                           54 seconds ago      Running             kube-apiserver              0                   e3e5b8447c647       kube-apiserver-default-k8s-diff-port-585688            kube-system
	63e746d642de5       550794e3b12ac21ec7fd940bdfb45f7c8dae3c52acd8c70e580b882de20c3dcc                                           54 seconds ago      Running             kube-scheduler              0                   fdd3a0c5d40ad       kube-scheduler-default-k8s-diff-port-585688            kube-system
	4f9f1c19c75f7       0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2                                           54 seconds ago      Running             etcd                        0                   399310dc0edec       etcd-default-k8s-diff-port-585688                      kube-system
	a5a86114d8fe5       2c9a4b058bd7e6ec479c38f9e8a1dac2f5ee5b0a3ebda6dfac968e8720229508                                           54 seconds ago      Running             kube-controller-manager     0                   f61108a4d4416       kube-controller-manager-default-k8s-diff-port-585688   kube-system
	
	
	==> coredns [11e65b52493b4e94f1416a401649f24baddb7f3c4e0ac9e58b75f1a543d20479] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:40032 - 12963 "HINFO IN 7207735516347211502.7330963781429141352. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.016757651s
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-585688
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-585688
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4473aa4ffaa416872fe849e19c0ce3dabca02c04
	                    minikube.k8s.io/name=default-k8s-diff-port-585688
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2026_01_11T07_57_15_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 11 Jan 2026 07:57:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-585688
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 11 Jan 2026 07:58:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 11 Jan 2026 07:58:42 +0000   Sun, 11 Jan 2026 07:57:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 11 Jan 2026 07:58:42 +0000   Sun, 11 Jan 2026 07:57:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 11 Jan 2026 07:58:42 +0000   Sun, 11 Jan 2026 07:57:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 11 Jan 2026 07:58:42 +0000   Sun, 11 Jan 2026 07:57:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    default-k8s-diff-port-585688
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 19a608e996fda0d6a8d0aefd69620b69
	  System UUID:                a93c91b9-2231-4c51-8ee3-8e1fc4eb7cf8
	  Boot ID:                    bf5ef4fd-1e5e-4242-b522-48b308ed11f1
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.35.0
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 coredns-7d764666f9-9n4hp                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     105s
	  kube-system                 etcd-default-k8s-diff-port-585688                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         111s
	  kube-system                 kindnet-lz5hh                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      105s
	  kube-system                 kube-apiserver-default-k8s-diff-port-585688             250m (3%)     0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-585688    200m (2%)     0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-proxy-5z6rc                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	  kube-system                 kube-scheduler-default-k8s-diff-port-585688             100m (1%)     0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	  kubernetes-dashboard        dashboard-metrics-scraper-867fb5f87b-4pb6z              0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-z4znd                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  106s  node-controller  Node default-k8s-diff-port-585688 event: Registered Node default-k8s-diff-port-585688 in Controller
	  Normal  RegisteredNode  50s   node-controller  Node default-k8s-diff-port-585688 event: Registered Node default-k8s-diff-port-585688 in Controller
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff d2 27 60 f0 e8 33 08 06
	[  +7.744805] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ae ab 39 92 a9 51 08 06
	[  +3.804668] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff f6 cc 68 e4 77 08 08 06
	[  +0.000399] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff d2 27 60 f0 e8 33 08 06
	[ +13.685163] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff ce 63 58 3f ae 06 08 06
	[  +0.000359] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ae ab 39 92 a9 51 08 06
	[ +15.294069] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 62 fd 79 5e 2e 8a 08 06
	[  +0.000335] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 92 a7 42 30 c5 33 08 06
	[  +0.001094] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 76 49 4e 78 31 d8 08 06
	[Jan11 07:56] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 1e 72 0a 69 e2 8c 08 06
	[  +0.129588] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 46 31 8d 74 d3 50 08 06
	[ +23.675508] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 72 b4 a7 bf 59 ba 08 06
	[  +0.000409] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 46 31 8d 74 d3 50 08 06
	
	
	==> etcd [4f9f1c19c75f7ac1ecbae6c98fc1987feb975fc126f5bcd1565546bc42504918] <==
	{"level":"info","ts":"2026-01-11T07:58:11.549787Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"dfc97eb0aae75b33 has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2026-01-11T07:58:11.549834Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"dfc97eb0aae75b33 became leader at term 3"}
	{"level":"info","ts":"2026-01-11T07:58:11.549847Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: dfc97eb0aae75b33 elected leader dfc97eb0aae75b33 at term 3"}
	{"level":"info","ts":"2026-01-11T07:58:11.551083Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"dfc97eb0aae75b33","local-member-attributes":"{Name:default-k8s-diff-port-585688 ClientURLs:[https://192.168.94.2:2379]}","cluster-id":"da400bbece288f5a","publish-timeout":"7s"}
	{"level":"info","ts":"2026-01-11T07:58:11.551102Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-11T07:58:11.551082Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-11T07:58:11.551364Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2026-01-11T07:58:11.551396Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2026-01-11T07:58:11.552488Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-11T07:58:11.552553Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-11T07:58:11.555952Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.94.2:2379"}
	{"level":"info","ts":"2026-01-11T07:58:11.555998Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"warn","ts":"2026-01-11T07:58:21.454756Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"218.802827ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-7d764666f9-9n4hp\" limit:1 ","response":"range_response_count:1 size:5944"}
	{"level":"info","ts":"2026-01-11T07:58:21.454866Z","caller":"traceutil/trace.go:172","msg":"trace[289215474] range","detail":"{range_begin:/registry/pods/kube-system/coredns-7d764666f9-9n4hp; range_end:; response_count:1; response_revision:586; }","duration":"218.934123ms","start":"2026-01-11T07:58:21.235917Z","end":"2026-01-11T07:58:21.454851Z","steps":["trace[289215474] 'range keys from in-memory index tree'  (duration: 218.641073ms)"],"step_count":1}
	{"level":"info","ts":"2026-01-11T07:58:21.514898Z","caller":"traceutil/trace.go:172","msg":"trace[162352008] linearizableReadLoop","detail":"{readStateIndex:615; appliedIndex:615; }","duration":"171.715166ms","start":"2026-01-11T07:58:21.343151Z","end":"2026-01-11T07:58:21.514866Z","steps":["trace[162352008] 'read index received'  (duration: 171.693319ms)","trace[162352008] 'applied index is now lower than readState.Index'  (duration: 20.42µs)"],"step_count":2}
	{"level":"info","ts":"2026-01-11T07:58:21.515030Z","caller":"traceutil/trace.go:172","msg":"trace[130066207] transaction","detail":"{read_only:false; response_revision:587; number_of_response:1; }","duration":"172.092548ms","start":"2026-01-11T07:58:21.342920Z","end":"2026-01-11T07:58:21.515013Z","steps":["trace[130066207] 'process raft request'  (duration: 171.983629ms)"],"step_count":1}
	{"level":"warn","ts":"2026-01-11T07:58:21.515125Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"171.96068ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-4pb6z\" limit:1 ","response":"range_response_count:1 size:4622"}
	{"level":"info","ts":"2026-01-11T07:58:21.515652Z","caller":"traceutil/trace.go:172","msg":"trace[1994663194] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-4pb6z; range_end:; response_count:1; response_revision:586; }","duration":"172.067197ms","start":"2026-01-11T07:58:21.343119Z","end":"2026-01-11T07:58:21.515186Z","steps":["trace[1994663194] 'agreement among raft nodes before linearized reading'  (duration: 171.831084ms)"],"step_count":1}
	{"level":"warn","ts":"2026-01-11T07:58:21.913623Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"204.278798ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6571767444569432784 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/kube-controller-manager-default-k8s-diff-port-585688\" mod_revision:510 > success:<request_put:<key:\"/registry/pods/kube-system/kube-controller-manager-default-k8s-diff-port-585688\" value_size:7682 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-controller-manager-default-k8s-diff-port-585688\" > >>","response":"size:16"}
	{"level":"info","ts":"2026-01-11T07:58:21.913704Z","caller":"traceutil/trace.go:172","msg":"trace[1891278321] linearizableReadLoop","detail":"{readStateIndex:619; appliedIndex:618; }","duration":"177.930741ms","start":"2026-01-11T07:58:21.735762Z","end":"2026-01-11T07:58:21.913693Z","steps":["trace[1891278321] 'read index received'  (duration: 60.609µs)","trace[1891278321] 'applied index is now lower than readState.Index'  (duration: 177.869403ms)"],"step_count":2}
	{"level":"warn","ts":"2026-01-11T07:58:21.913867Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"178.100206ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-7d764666f9-9n4hp\" limit:1 ","response":"range_response_count:1 size:5944"}
	{"level":"info","ts":"2026-01-11T07:58:21.914030Z","caller":"traceutil/trace.go:172","msg":"trace[591670269] transaction","detail":"{read_only:false; response_revision:590; number_of_response:1; }","duration":"323.543214ms","start":"2026-01-11T07:58:21.590456Z","end":"2026-01-11T07:58:21.913999Z","steps":["trace[591670269] 'process raft request'  (duration: 118.495544ms)","trace[591670269] 'compare'  (duration: 204.061463ms)"],"step_count":2}
	{"level":"info","ts":"2026-01-11T07:58:21.913946Z","caller":"traceutil/trace.go:172","msg":"trace[89422722] range","detail":"{range_begin:/registry/pods/kube-system/coredns-7d764666f9-9n4hp; range_end:; response_count:1; response_revision:590; }","duration":"178.181821ms","start":"2026-01-11T07:58:21.735755Z","end":"2026-01-11T07:58:21.913937Z","steps":["trace[89422722] 'agreement among raft nodes before linearized reading'  (duration: 177.998489ms)"],"step_count":1}
	{"level":"warn","ts":"2026-01-11T07:58:21.914164Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2026-01-11T07:58:21.590428Z","time spent":"323.676965ms","remote":"127.0.0.1:43076","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":7769,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kube-controller-manager-default-k8s-diff-port-585688\" mod_revision:510 > success:<request_put:<key:\"/registry/pods/kube-system/kube-controller-manager-default-k8s-diff-port-585688\" value_size:7682 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-controller-manager-default-k8s-diff-port-585688\" > >"}
	{"level":"info","ts":"2026-01-11T07:58:33.229917Z","caller":"traceutil/trace.go:172","msg":"trace[741921538] transaction","detail":"{read_only:false; response_revision:616; number_of_response:1; }","duration":"129.560386ms","start":"2026-01-11T07:58:33.100309Z","end":"2026-01-11T07:58:33.229870Z","steps":["trace[741921538] 'process raft request'  (duration: 52.99275ms)","trace[741921538] 'compare'  (duration: 76.316437ms)"],"step_count":2}
	
	
	==> kernel <==
	 07:59:05 up 41 min,  0 user,  load average: 2.72, 3.32, 2.51
	Linux default-k8s-diff-port-585688 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [36c8108840cbbfcbfffcbdcb463b3ce8cb8f3e93f5fe9edafb5daf5a090e8112] <==
	I0111 07:58:13.892698       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0111 07:58:13.893006       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I0111 07:58:13.893342       1 main.go:148] setting mtu 1500 for CNI 
	I0111 07:58:13.893379       1 main.go:178] kindnetd IP family: "ipv4"
	I0111 07:58:13.893405       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2026-01-11T07:58:14Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0111 07:58:14.097273       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0111 07:58:14.097320       1 controller.go:381] "Waiting for informer caches to sync"
	I0111 07:58:14.097335       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0111 07:58:14.097485       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I0111 07:58:14.397881       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0111 07:58:14.397911       1 metrics.go:72] Registering metrics
	I0111 07:58:14.398041       1 controller.go:711] "Syncing nftables rules"
	I0111 07:58:24.097949       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0111 07:58:24.098019       1 main.go:301] handling current node
	I0111 07:58:34.097918       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0111 07:58:34.097976       1 main.go:301] handling current node
	I0111 07:58:44.098150       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0111 07:58:44.098189       1 main.go:301] handling current node
	I0111 07:58:54.099678       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0111 07:58:54.099733       1 main.go:301] handling current node
	I0111 07:59:04.100905       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0111 07:59:04.100950       1 main.go:301] handling current node
	
	
	==> kube-apiserver [8f22f3b6659f979a42e531e71e66b83f2e5a22b700a721070ae92fe483579939] <==
	I0111 07:58:12.625146       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0111 07:58:12.625347       1 shared_informer.go:377] "Caches are synced"
	I0111 07:58:12.625424       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I0111 07:58:12.625500       1 shared_informer.go:377] "Caches are synced"
	I0111 07:58:12.625510       1 shared_informer.go:377] "Caches are synced"
	I0111 07:58:12.626434       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I0111 07:58:12.626471       1 aggregator.go:187] initial CRD sync complete...
	I0111 07:58:12.626485       1 autoregister_controller.go:144] Starting autoregister controller
	I0111 07:58:12.626492       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0111 07:58:12.626498       1 cache.go:39] Caches are synced for autoregister controller
	I0111 07:58:12.631597       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E0111 07:58:12.633969       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0111 07:58:12.642930       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I0111 07:58:12.674728       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I0111 07:58:12.924397       1 controller.go:667] quota admission added evaluator for: namespaces
	I0111 07:58:12.960765       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I0111 07:58:12.984278       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0111 07:58:12.993878       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0111 07:58:13.003294       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I0111 07:58:13.039510       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.99.59.43"}
	I0111 07:58:13.050640       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.110.56.189"}
	I0111 07:58:13.528258       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I0111 07:58:16.215658       1 controller.go:667] quota admission added evaluator for: endpoints
	I0111 07:58:16.266423       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0111 07:58:16.465518       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [a5a86114d8fe5ac425c7c94815a54aaa43fc1dc726e4e3b0be78ef6291f0b2df] <==
	I0111 07:58:15.771090       1 shared_informer.go:377] "Caches are synced"
	I0111 07:58:15.771147       1 range_allocator.go:177] "Sending events to api server"
	I0111 07:58:15.771195       1 range_allocator.go:181] "Starting range CIDR allocator"
	I0111 07:58:15.771201       1 shared_informer.go:370] "Waiting for caches to sync"
	I0111 07:58:15.771206       1 shared_informer.go:377] "Caches are synced"
	I0111 07:58:15.771259       1 shared_informer.go:377] "Caches are synced"
	I0111 07:58:15.771304       1 shared_informer.go:377] "Caches are synced"
	I0111 07:58:15.771315       1 shared_informer.go:377] "Caches are synced"
	I0111 07:58:15.771331       1 shared_informer.go:377] "Caches are synced"
	I0111 07:58:15.771384       1 shared_informer.go:377] "Caches are synced"
	I0111 07:58:15.771417       1 shared_informer.go:377] "Caches are synced"
	I0111 07:58:15.771441       1 shared_informer.go:377] "Caches are synced"
	I0111 07:58:15.771594       1 shared_informer.go:377] "Caches are synced"
	I0111 07:58:15.771613       1 shared_informer.go:377] "Caches are synced"
	I0111 07:58:15.771695       1 shared_informer.go:377] "Caches are synced"
	I0111 07:58:15.771724       1 shared_informer.go:377] "Caches are synced"
	I0111 07:58:15.772466       1 shared_informer.go:377] "Caches are synced"
	I0111 07:58:15.772512       1 shared_informer.go:377] "Caches are synced"
	I0111 07:58:15.776335       1 shared_informer.go:377] "Caches are synced"
	I0111 07:58:15.780639       1 shared_informer.go:370] "Waiting for caches to sync"
	I0111 07:58:15.871219       1 shared_informer.go:377] "Caches are synced"
	I0111 07:58:15.871241       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I0111 07:58:15.871246       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I0111 07:58:15.881166       1 shared_informer.go:377] "Caches are synced"
	I0111 07:58:16.470065       1 endpointslice_controller.go:361] "Error syncing endpoint slices for service, retrying" key="kubernetes-dashboard/dashboard-metrics-scraper" err="EndpointSlice informer cache is out of date"
	
	
	==> kube-proxy [506df3158c5adff8c547b0ed655b3c34bf43771c0d58f5100ffa1fc8044ac26d] <==
	I0111 07:58:13.711138       1 server_linux.go:53] "Using iptables proxy"
	I0111 07:58:13.772878       1 shared_informer.go:370] "Waiting for caches to sync"
	I0111 07:58:13.874018       1 shared_informer.go:377] "Caches are synced"
	I0111 07:58:13.874054       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E0111 07:58:13.874154       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0111 07:58:13.894504       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0111 07:58:13.894573       1 server_linux.go:136] "Using iptables Proxier"
	I0111 07:58:13.900324       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0111 07:58:13.900617       1 server.go:529] "Version info" version="v1.35.0"
	I0111 07:58:13.900633       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0111 07:58:13.901880       1 config.go:106] "Starting endpoint slice config controller"
	I0111 07:58:13.901994       1 config.go:200] "Starting service config controller"
	I0111 07:58:13.902039       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0111 07:58:13.902010       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0111 07:58:13.901968       1 config.go:403] "Starting serviceCIDR config controller"
	I0111 07:58:13.902280       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0111 07:58:13.902346       1 config.go:309] "Starting node config controller"
	I0111 07:58:13.902354       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0111 07:58:13.902360       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0111 07:58:14.002435       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0111 07:58:14.002435       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0111 07:58:14.002501       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [63e746d642de550ccfb3f5c5bee17e53f1ac0ac24283ebc2ad8f7153b15dec23] <==
	I0111 07:58:11.236674       1 serving.go:386] Generated self-signed cert in-memory
	W0111 07:58:12.560265       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0111 07:58:12.560303       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0111 07:58:12.560315       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0111 07:58:12.560326       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0111 07:58:12.595875       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0"
	I0111 07:58:12.595916       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0111 07:58:12.599567       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0111 07:58:12.599600       1 shared_informer.go:370] "Waiting for caches to sync"
	I0111 07:58:12.599699       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0111 07:58:12.599842       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0111 07:58:12.700067       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Jan 11 07:58:29 default-k8s-diff-port-585688 kubelet[744]: E0111 07:58:29.431911     744 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-4pb6z_kubernetes-dashboard(b661fc50-c68f-431c-a8d5-c0e1231197bc)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-4pb6z" podUID="b661fc50-c68f-431c-a8d5-c0e1231197bc"
	Jan 11 07:58:31 default-k8s-diff-port-585688 kubelet[744]: E0111 07:58:31.281737     744 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-4pb6z" containerName="dashboard-metrics-scraper"
	Jan 11 07:58:31 default-k8s-diff-port-585688 kubelet[744]: I0111 07:58:31.281792     744 scope.go:122] "RemoveContainer" containerID="0c34f50ecf43c7aba09a03efe28f574baf9892bb7e3428221642c03ca41a9aff"
	Jan 11 07:58:31 default-k8s-diff-port-585688 kubelet[744]: I0111 07:58:31.368696     744 scope.go:122] "RemoveContainer" containerID="0c34f50ecf43c7aba09a03efe28f574baf9892bb7e3428221642c03ca41a9aff"
	Jan 11 07:58:31 default-k8s-diff-port-585688 kubelet[744]: E0111 07:58:31.368996     744 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-4pb6z" containerName="dashboard-metrics-scraper"
	Jan 11 07:58:31 default-k8s-diff-port-585688 kubelet[744]: I0111 07:58:31.369033     744 scope.go:122] "RemoveContainer" containerID="242369a0a892123eb678ee784c6587e4592b7c6c2020bcc818a60094700dd714"
	Jan 11 07:58:31 default-k8s-diff-port-585688 kubelet[744]: E0111 07:58:31.369246     744 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-4pb6z_kubernetes-dashboard(b661fc50-c68f-431c-a8d5-c0e1231197bc)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-4pb6z" podUID="b661fc50-c68f-431c-a8d5-c0e1231197bc"
	Jan 11 07:58:39 default-k8s-diff-port-585688 kubelet[744]: E0111 07:58:39.432024     744 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-4pb6z" containerName="dashboard-metrics-scraper"
	Jan 11 07:58:39 default-k8s-diff-port-585688 kubelet[744]: I0111 07:58:39.432061     744 scope.go:122] "RemoveContainer" containerID="242369a0a892123eb678ee784c6587e4592b7c6c2020bcc818a60094700dd714"
	Jan 11 07:58:39 default-k8s-diff-port-585688 kubelet[744]: E0111 07:58:39.432239     744 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-4pb6z_kubernetes-dashboard(b661fc50-c68f-431c-a8d5-c0e1231197bc)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-4pb6z" podUID="b661fc50-c68f-431c-a8d5-c0e1231197bc"
	Jan 11 07:58:44 default-k8s-diff-port-585688 kubelet[744]: I0111 07:58:44.404157     744 scope.go:122] "RemoveContainer" containerID="ba9522fd01ed5cb4a0478ac7b6c3b1bf98766ee9be0146de39fa388399f29772"
	Jan 11 07:58:48 default-k8s-diff-port-585688 kubelet[744]: E0111 07:58:48.742674     744 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-9n4hp" containerName="coredns"
	Jan 11 07:58:57 default-k8s-diff-port-585688 kubelet[744]: E0111 07:58:57.281648     744 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-4pb6z" containerName="dashboard-metrics-scraper"
	Jan 11 07:58:57 default-k8s-diff-port-585688 kubelet[744]: I0111 07:58:57.281704     744 scope.go:122] "RemoveContainer" containerID="242369a0a892123eb678ee784c6587e4592b7c6c2020bcc818a60094700dd714"
	Jan 11 07:58:57 default-k8s-diff-port-585688 kubelet[744]: I0111 07:58:57.440596     744 scope.go:122] "RemoveContainer" containerID="242369a0a892123eb678ee784c6587e4592b7c6c2020bcc818a60094700dd714"
	Jan 11 07:58:57 default-k8s-diff-port-585688 kubelet[744]: E0111 07:58:57.440905     744 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-4pb6z" containerName="dashboard-metrics-scraper"
	Jan 11 07:58:57 default-k8s-diff-port-585688 kubelet[744]: I0111 07:58:57.440942     744 scope.go:122] "RemoveContainer" containerID="5edc90c892262436c0a2a920b53c5f04438fbac9879ebb9d336af2248627b478"
	Jan 11 07:58:57 default-k8s-diff-port-585688 kubelet[744]: E0111 07:58:57.441157     744 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-4pb6z_kubernetes-dashboard(b661fc50-c68f-431c-a8d5-c0e1231197bc)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-4pb6z" podUID="b661fc50-c68f-431c-a8d5-c0e1231197bc"
	Jan 11 07:58:59 default-k8s-diff-port-585688 kubelet[744]: E0111 07:58:59.431764     744 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-4pb6z" containerName="dashboard-metrics-scraper"
	Jan 11 07:58:59 default-k8s-diff-port-585688 kubelet[744]: I0111 07:58:59.431828     744 scope.go:122] "RemoveContainer" containerID="5edc90c892262436c0a2a920b53c5f04438fbac9879ebb9d336af2248627b478"
	Jan 11 07:58:59 default-k8s-diff-port-585688 kubelet[744]: E0111 07:58:59.432040     744 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-4pb6z_kubernetes-dashboard(b661fc50-c68f-431c-a8d5-c0e1231197bc)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-4pb6z" podUID="b661fc50-c68f-431c-a8d5-c0e1231197bc"
	Jan 11 07:59:02 default-k8s-diff-port-585688 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Jan 11 07:59:02 default-k8s-diff-port-585688 systemd[1]: kubelet.service: Deactivated successfully.
	Jan 11 07:59:02 default-k8s-diff-port-585688 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Jan 11 07:59:02 default-k8s-diff-port-585688 systemd[1]: kubelet.service: Consumed 1.783s CPU time.
	
	
	==> kubernetes-dashboard [b582b197e069882b2c2b968a83d200c6e23f323a5a4f092959b23fde2b0b85d0] <==
	2026/01/11 07:58:23 Using namespace: kubernetes-dashboard
	2026/01/11 07:58:23 Using in-cluster config to connect to apiserver
	2026/01/11 07:58:23 Using secret token for csrf signing
	2026/01/11 07:58:23 Initializing csrf token from kubernetes-dashboard-csrf secret
	2026/01/11 07:58:23 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2026/01/11 07:58:23 Successful initial request to the apiserver, version: v1.35.0
	2026/01/11 07:58:23 Generating JWE encryption key
	2026/01/11 07:58:23 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2026/01/11 07:58:23 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2026/01/11 07:58:24 Initializing JWE encryption key from synchronized object
	2026/01/11 07:58:24 Creating in-cluster Sidecar client
	2026/01/11 07:58:24 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2026/01/11 07:58:24 Serving insecurely on HTTP port: 9090
	2026/01/11 07:58:54 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2026/01/11 07:58:23 Starting overwatch
	
	
	==> storage-provisioner [ba9522fd01ed5cb4a0478ac7b6c3b1bf98766ee9be0146de39fa388399f29772] <==
	I0111 07:58:13.666882       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0111 07:58:43.670590       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [dcee32095eab7e64a77f4e9473740b27b306a967ae964d32750fce0b53ac5fd8] <==
	I0111 07:58:44.478538       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0111 07:58:44.488245       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0111 07:58:44.488317       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W0111 07:58:44.491376       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 07:58:47.947032       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 07:58:52.208063       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 07:58:55.806672       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 07:58:58.859904       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 07:59:01.882391       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 07:59:01.886981       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0111 07:59:01.887139       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0111 07:59:01.887283       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-585688_2c03ff84-8d4f-4730-8f7c-604368526a2a!
	I0111 07:59:01.887280       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"06bdb24f-e9c9-497c-a246-c1be0eb5956d", APIVersion:"v1", ResourceVersion:"646", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-585688_2c03ff84-8d4f-4730-8f7c-604368526a2a became leader
	W0111 07:59:01.889417       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 07:59:01.893477       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0111 07:59:01.987511       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-585688_2c03ff84-8d4f-4730-8f7c-604368526a2a!
	W0111 07:59:03.896602       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 07:59:03.900791       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 07:59:05.904632       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 07:59:05.909415       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-585688 -n default-k8s-diff-port-585688
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-585688 -n default-k8s-diff-port-585688: exit status 2 (327.604067ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context default-k8s-diff-port-585688 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect default-k8s-diff-port-585688
helpers_test.go:244: (dbg) docker inspect default-k8s-diff-port-585688:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "4e491c631d37cd114a87ec85803ddb84ccce4a183e656efbfac985531201e66b",
	        "Created": "2026-01-11T07:57:00.448699199Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 321366,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2026-01-11T07:58:03.98416149Z",
	            "FinishedAt": "2026-01-11T07:58:03.096284685Z"
	        },
	        "Image": "sha256:9d81ad6f098dcf669510a21e6f98da4d3301079c139c22f3d7bcee050830f45e",
	        "ResolvConfPath": "/var/lib/docker/containers/4e491c631d37cd114a87ec85803ddb84ccce4a183e656efbfac985531201e66b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4e491c631d37cd114a87ec85803ddb84ccce4a183e656efbfac985531201e66b/hostname",
	        "HostsPath": "/var/lib/docker/containers/4e491c631d37cd114a87ec85803ddb84ccce4a183e656efbfac985531201e66b/hosts",
	        "LogPath": "/var/lib/docker/containers/4e491c631d37cd114a87ec85803ddb84ccce4a183e656efbfac985531201e66b/4e491c631d37cd114a87ec85803ddb84ccce4a183e656efbfac985531201e66b-json.log",
	        "Name": "/default-k8s-diff-port-585688",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-585688:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-585688",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "4e491c631d37cd114a87ec85803ddb84ccce4a183e656efbfac985531201e66b",
	                "LowerDir": "/var/lib/docker/overlay2/31bb134ce8349339fa6c08d8e9235d969f6e89a7b0ac8aa0139d1b384ddee071-init/diff:/var/lib/docker/overlay2/fbed3e2386e680328722e9ed68250a4aae1dd3b50a64848ee2ebb685fb1cc002/diff",
	                "MergedDir": "/var/lib/docker/overlay2/31bb134ce8349339fa6c08d8e9235d969f6e89a7b0ac8aa0139d1b384ddee071/merged",
	                "UpperDir": "/var/lib/docker/overlay2/31bb134ce8349339fa6c08d8e9235d969f6e89a7b0ac8aa0139d1b384ddee071/diff",
	                "WorkDir": "/var/lib/docker/overlay2/31bb134ce8349339fa6c08d8e9235d969f6e89a7b0ac8aa0139d1b384ddee071/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-585688",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-585688/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-585688",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-585688",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-585688",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "dd2c0bafd76c52aeee70903f034bdb16d3924df2fcd183472f2090a195870109",
	            "SandboxKey": "/var/run/docker/netns/dd2c0bafd76c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33123"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33124"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33127"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33125"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33126"
	                    }
	                ]
	            },
	            "Networks": {
	                "default-k8s-diff-port-585688": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "a19185e10f04402d7102cb5e5f5b024dff862ce9b51ccbccdcfe21cef3be911e",
	                    "EndpointID": "d84fd8535d958614b81e0353741aa7b015805b499f39d2d066206e810946a5c7",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "4e:00:df:95:98:4c",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-585688",
	                        "4e491c631d37"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-585688 -n default-k8s-diff-port-585688
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-585688 -n default-k8s-diff-port-585688: exit status 2 (324.280485ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-585688 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-585688 logs -n 25: (1.038209196s)
helpers_test.go:261: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────
────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │              PROFILE              │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────
────────────┤
	│ image   │ old-k8s-version-086314 image list --format=json                                                                                                                                                                                               │ old-k8s-version-086314            │ jenkins │ v1.37.0 │ 11 Jan 26 07:58 UTC │ 11 Jan 26 07:58 UTC │
	│ pause   │ -p old-k8s-version-086314 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-086314            │ jenkins │ v1.37.0 │ 11 Jan 26 07:58 UTC │                     │
	│ delete  │ -p old-k8s-version-086314                                                                                                                                                                                                                     │ old-k8s-version-086314            │ jenkins │ v1.37.0 │ 11 Jan 26 07:58 UTC │ 11 Jan 26 07:58 UTC │
	│ image   │ no-preload-875594 image list --format=json                                                                                                                                                                                                    │ no-preload-875594                 │ jenkins │ v1.37.0 │ 11 Jan 26 07:58 UTC │ 11 Jan 26 07:58 UTC │
	│ pause   │ -p no-preload-875594 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-875594                 │ jenkins │ v1.37.0 │ 11 Jan 26 07:58 UTC │                     │
	│ image   │ embed-certs-285966 image list --format=json                                                                                                                                                                                                   │ embed-certs-285966                │ jenkins │ v1.37.0 │ 11 Jan 26 07:58 UTC │ 11 Jan 26 07:58 UTC │
	│ pause   │ -p embed-certs-285966 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-285966                │ jenkins │ v1.37.0 │ 11 Jan 26 07:58 UTC │                     │
	│ delete  │ -p old-k8s-version-086314                                                                                                                                                                                                                     │ old-k8s-version-086314            │ jenkins │ v1.37.0 │ 11 Jan 26 07:58 UTC │ 11 Jan 26 07:58 UTC │
	│ start   │ -p newest-cni-613901 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ newest-cni-613901                 │ jenkins │ v1.37.0 │ 11 Jan 26 07:58 UTC │ 11 Jan 26 07:58 UTC │
	│ delete  │ -p no-preload-875594                                                                                                                                                                                                                          │ no-preload-875594                 │ jenkins │ v1.37.0 │ 11 Jan 26 07:58 UTC │ 11 Jan 26 07:58 UTC │
	│ delete  │ -p embed-certs-285966                                                                                                                                                                                                                         │ embed-certs-285966                │ jenkins │ v1.37.0 │ 11 Jan 26 07:58 UTC │ 11 Jan 26 07:58 UTC │
	│ delete  │ -p no-preload-875594                                                                                                                                                                                                                          │ no-preload-875594                 │ jenkins │ v1.37.0 │ 11 Jan 26 07:58 UTC │ 11 Jan 26 07:58 UTC │
	│ start   │ -p test-preload-dl-gcs-476138 --download-only --kubernetes-version v1.34.0-rc.1 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=crio                                                                        │ test-preload-dl-gcs-476138        │ jenkins │ v1.37.0 │ 11 Jan 26 07:58 UTC │                     │
	│ delete  │ -p embed-certs-285966                                                                                                                                                                                                                         │ embed-certs-285966                │ jenkins │ v1.37.0 │ 11 Jan 26 07:58 UTC │ 11 Jan 26 07:58 UTC │
	│ delete  │ -p test-preload-dl-gcs-476138                                                                                                                                                                                                                 │ test-preload-dl-gcs-476138        │ jenkins │ v1.37.0 │ 11 Jan 26 07:58 UTC │ 11 Jan 26 07:58 UTC │
	│ start   │ -p test-preload-dl-github-044639 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=github --alsologtostderr --v=1 --driver=docker  --container-runtime=crio                                                                  │ test-preload-dl-github-044639     │ jenkins │ v1.37.0 │ 11 Jan 26 07:58 UTC │                     │
	│ delete  │ -p test-preload-dl-github-044639                                                                                                                                                                                                              │ test-preload-dl-github-044639     │ jenkins │ v1.37.0 │ 11 Jan 26 07:58 UTC │ 11 Jan 26 07:58 UTC │
	│ start   │ -p test-preload-dl-gcs-cached-787958 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=crio                                                                 │ test-preload-dl-gcs-cached-787958 │ jenkins │ v1.37.0 │ 11 Jan 26 07:58 UTC │                     │
	│ delete  │ -p test-preload-dl-gcs-cached-787958                                                                                                                                                                                                          │ test-preload-dl-gcs-cached-787958 │ jenkins │ v1.37.0 │ 11 Jan 26 07:58 UTC │ 11 Jan 26 07:58 UTC │
	│ addons  │ enable metrics-server -p newest-cni-613901 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-613901                 │ jenkins │ v1.37.0 │ 11 Jan 26 07:58 UTC │                     │
	│ stop    │ -p newest-cni-613901 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-613901                 │ jenkins │ v1.37.0 │ 11 Jan 26 07:58 UTC │ 11 Jan 26 07:59 UTC │
	│ image   │ default-k8s-diff-port-585688 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-585688      │ jenkins │ v1.37.0 │ 11 Jan 26 07:59 UTC │ 11 Jan 26 07:59 UTC │
	│ pause   │ -p default-k8s-diff-port-585688 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-585688      │ jenkins │ v1.37.0 │ 11 Jan 26 07:59 UTC │                     │
	│ addons  │ enable dashboard -p newest-cni-613901 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-613901                 │ jenkins │ v1.37.0 │ 11 Jan 26 07:59 UTC │ 11 Jan 26 07:59 UTC │
	│ start   │ -p newest-cni-613901 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ newest-cni-613901                 │ jenkins │ v1.37.0 │ 11 Jan 26 07:59 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────
────────────┘
	
	
	==> Last Start <==
	Log file created at: 2026/01/11 07:59:04
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0111 07:59:04.424321  334964 out.go:360] Setting OutFile to fd 1 ...
	I0111 07:59:04.424599  334964 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 07:59:04.424609  334964 out.go:374] Setting ErrFile to fd 2...
	I0111 07:59:04.424616  334964 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 07:59:04.424850  334964 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22402-4689/.minikube/bin
	I0111 07:59:04.425313  334964 out.go:368] Setting JSON to false
	I0111 07:59:04.426460  334964 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2492,"bootTime":1768115852,"procs":407,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0111 07:59:04.426513  334964 start.go:143] virtualization: kvm guest
	I0111 07:59:04.428404  334964 out.go:179] * [newest-cni-613901] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0111 07:59:04.429688  334964 out.go:179]   - MINIKUBE_LOCATION=22402
	I0111 07:59:04.429693  334964 notify.go:221] Checking for updates...
	I0111 07:59:04.432070  334964 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0111 07:59:04.433340  334964 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22402-4689/kubeconfig
	I0111 07:59:04.434705  334964 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22402-4689/.minikube
	I0111 07:59:04.435864  334964 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0111 07:59:04.437099  334964 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0111 07:59:04.438644  334964 config.go:182] Loaded profile config "newest-cni-613901": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 07:59:04.439184  334964 driver.go:422] Setting default libvirt URI to qemu:///system
	I0111 07:59:04.469255  334964 docker.go:124] docker version: linux-29.1.4:Docker Engine - Community
	I0111 07:59:04.469400  334964 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0111 07:59:04.532551  334964 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:54 SystemTime:2026-01-11 07:59:04.522566296 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0111 07:59:04.532660  334964 docker.go:319] overlay module found
	I0111 07:59:04.534785  334964 out.go:179] * Using the docker driver based on existing profile
	I0111 07:59:04.536472  334964 start.go:309] selected driver: docker
	I0111 07:59:04.536485  334964 start.go:928] validating driver "docker" against &{Name:newest-cni-613901 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-613901 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0111 07:59:04.536564  334964 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0111 07:59:04.537160  334964 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0111 07:59:04.596657  334964 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:54 SystemTime:2026-01-11 07:59:04.586303209 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0111 07:59:04.597010  334964 start_flags.go:1038] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0111 07:59:04.597044  334964 cni.go:84] Creating CNI manager for ""
	I0111 07:59:04.597131  334964 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0111 07:59:04.597199  334964 start.go:353] cluster config:
	{Name:newest-cni-613901 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-613901 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0111 07:59:04.599224  334964 out.go:179] * Starting "newest-cni-613901" primary control-plane node in "newest-cni-613901" cluster
	I0111 07:59:04.601496  334964 cache.go:134] Beginning downloading kic base image for docker with crio
	I0111 07:59:04.604261  334964 out.go:179] * Pulling base image v0.0.48-1768032998-22402 ...
	I0111 07:59:04.605369  334964 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0111 07:59:04.605402  334964 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22402-4689/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4
	I0111 07:59:04.605417  334964 cache.go:65] Caching tarball of preloaded images
	I0111 07:59:04.605467  334964 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 in local docker daemon
	I0111 07:59:04.605487  334964 preload.go:251] Found /home/jenkins/minikube-integration/22402-4689/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0111 07:59:04.605494  334964 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I0111 07:59:04.605578  334964 profile.go:143] Saving config to /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/newest-cni-613901/config.json ...
	I0111 07:59:04.627520  334964 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 in local docker daemon, skipping pull
	I0111 07:59:04.627538  334964 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 exists in daemon, skipping load
	I0111 07:59:04.627553  334964 cache.go:243] Successfully downloaded all kic artifacts
	I0111 07:59:04.627579  334964 start.go:360] acquireMachinesLock for newest-cni-613901: {Name:mk432dc96cb4574bc050bdfd86d1212df9b7472d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0111 07:59:04.627639  334964 start.go:364] duration metric: took 35.746µs to acquireMachinesLock for "newest-cni-613901"
	I0111 07:59:04.627656  334964 start.go:96] Skipping create...Using existing machine configuration
	I0111 07:59:04.627663  334964 fix.go:54] fixHost starting: 
	I0111 07:59:04.627886  334964 cli_runner.go:164] Run: docker container inspect newest-cni-613901 --format={{.State.Status}}
	I0111 07:59:04.649205  334964 fix.go:112] recreateIfNeeded on newest-cni-613901: state=Stopped err=<nil>
	W0111 07:59:04.649242  334964 fix.go:138] unexpected machine state, will restart: <nil>
	
	
	==> CRI-O <==
	Jan 11 07:58:31 default-k8s-diff-port-585688 crio[580]: time="2026-01-11T07:58:31.324774852Z" level=info msg="Started container" PID=1776 containerID=242369a0a892123eb678ee784c6587e4592b7c6c2020bcc818a60094700dd714 description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-4pb6z/dashboard-metrics-scraper id=ec4e5bcb-daf2-407d-ba8b-f5a5599263d5 name=/runtime.v1.RuntimeService/StartContainer sandboxID=dd3ba93cc5adaa2a466d7f40d80c17bd50b3418e7bb934b2d627a6f517eb8e30
	Jan 11 07:58:31 default-k8s-diff-port-585688 crio[580]: time="2026-01-11T07:58:31.370046434Z" level=info msg="Removing container: 0c34f50ecf43c7aba09a03efe28f574baf9892bb7e3428221642c03ca41a9aff" id=8e8679e3-c4dd-42d4-88e9-9ca2f16e73e1 name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 11 07:58:31 default-k8s-diff-port-585688 crio[580]: time="2026-01-11T07:58:31.3799179Z" level=info msg="Removed container 0c34f50ecf43c7aba09a03efe28f574baf9892bb7e3428221642c03ca41a9aff: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-4pb6z/dashboard-metrics-scraper" id=8e8679e3-c4dd-42d4-88e9-9ca2f16e73e1 name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 11 07:58:44 default-k8s-diff-port-585688 crio[580]: time="2026-01-11T07:58:44.404917252Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=ce1cbd0d-6ad7-48a2-a527-07245bdaadb4 name=/runtime.v1.ImageService/ImageStatus
	Jan 11 07:58:44 default-k8s-diff-port-585688 crio[580]: time="2026-01-11T07:58:44.406034783Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=1b572cb3-bab3-410f-b0b8-5431605877fd name=/runtime.v1.ImageService/ImageStatus
	Jan 11 07:58:44 default-k8s-diff-port-585688 crio[580]: time="2026-01-11T07:58:44.40746759Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=138e2fc2-7773-46a0-b73e-79cf365c5ad1 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 11 07:58:44 default-k8s-diff-port-585688 crio[580]: time="2026-01-11T07:58:44.407630705Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 11 07:58:44 default-k8s-diff-port-585688 crio[580]: time="2026-01-11T07:58:44.413666708Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 11 07:58:44 default-k8s-diff-port-585688 crio[580]: time="2026-01-11T07:58:44.413956302Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/c9fa5bfff5ef4edff5de90af1648e7d2578238c66f6603748357b43c01069681/merged/etc/passwd: no such file or directory"
	Jan 11 07:58:44 default-k8s-diff-port-585688 crio[580]: time="2026-01-11T07:58:44.413997548Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/c9fa5bfff5ef4edff5de90af1648e7d2578238c66f6603748357b43c01069681/merged/etc/group: no such file or directory"
	Jan 11 07:58:44 default-k8s-diff-port-585688 crio[580]: time="2026-01-11T07:58:44.414318211Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 11 07:58:44 default-k8s-diff-port-585688 crio[580]: time="2026-01-11T07:58:44.459539414Z" level=info msg="Created container dcee32095eab7e64a77f4e9473740b27b306a967ae964d32750fce0b53ac5fd8: kube-system/storage-provisioner/storage-provisioner" id=138e2fc2-7773-46a0-b73e-79cf365c5ad1 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 11 07:58:44 default-k8s-diff-port-585688 crio[580]: time="2026-01-11T07:58:44.460253924Z" level=info msg="Starting container: dcee32095eab7e64a77f4e9473740b27b306a967ae964d32750fce0b53ac5fd8" id=de9e1c1c-6ed7-46a0-aa19-819d9a2570fe name=/runtime.v1.RuntimeService/StartContainer
	Jan 11 07:58:44 default-k8s-diff-port-585688 crio[580]: time="2026-01-11T07:58:44.462425015Z" level=info msg="Started container" PID=1790 containerID=dcee32095eab7e64a77f4e9473740b27b306a967ae964d32750fce0b53ac5fd8 description=kube-system/storage-provisioner/storage-provisioner id=de9e1c1c-6ed7-46a0-aa19-819d9a2570fe name=/runtime.v1.RuntimeService/StartContainer sandboxID=c8a1f86285d7f65a34470415a530a541609104a8b79802c3b5634251e944f004
	Jan 11 07:58:57 default-k8s-diff-port-585688 crio[580]: time="2026-01-11T07:58:57.282392151Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=97532016-fbbb-4d8b-b5a3-a0831be05c67 name=/runtime.v1.ImageService/ImageStatus
	Jan 11 07:58:57 default-k8s-diff-port-585688 crio[580]: time="2026-01-11T07:58:57.283431377Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=3112da97-3fb3-4bff-9bc5-a2d9649ec202 name=/runtime.v1.ImageService/ImageStatus
	Jan 11 07:58:57 default-k8s-diff-port-585688 crio[580]: time="2026-01-11T07:58:57.284456765Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-4pb6z/dashboard-metrics-scraper" id=f191c917-c1c8-4f8d-a138-1b1485e13630 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 11 07:58:57 default-k8s-diff-port-585688 crio[580]: time="2026-01-11T07:58:57.284569816Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 11 07:58:57 default-k8s-diff-port-585688 crio[580]: time="2026-01-11T07:58:57.290195273Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 11 07:58:57 default-k8s-diff-port-585688 crio[580]: time="2026-01-11T07:58:57.290691042Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 11 07:58:57 default-k8s-diff-port-585688 crio[580]: time="2026-01-11T07:58:57.318871012Z" level=info msg="Created container 5edc90c892262436c0a2a920b53c5f04438fbac9879ebb9d336af2248627b478: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-4pb6z/dashboard-metrics-scraper" id=f191c917-c1c8-4f8d-a138-1b1485e13630 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 11 07:58:57 default-k8s-diff-port-585688 crio[580]: time="2026-01-11T07:58:57.319470408Z" level=info msg="Starting container: 5edc90c892262436c0a2a920b53c5f04438fbac9879ebb9d336af2248627b478" id=7b0d1090-1ebf-4cff-b788-88318b142603 name=/runtime.v1.RuntimeService/StartContainer
	Jan 11 07:58:57 default-k8s-diff-port-585688 crio[580]: time="2026-01-11T07:58:57.321501427Z" level=info msg="Started container" PID=1826 containerID=5edc90c892262436c0a2a920b53c5f04438fbac9879ebb9d336af2248627b478 description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-4pb6z/dashboard-metrics-scraper id=7b0d1090-1ebf-4cff-b788-88318b142603 name=/runtime.v1.RuntimeService/StartContainer sandboxID=dd3ba93cc5adaa2a466d7f40d80c17bd50b3418e7bb934b2d627a6f517eb8e30
	Jan 11 07:58:57 default-k8s-diff-port-585688 crio[580]: time="2026-01-11T07:58:57.441844962Z" level=info msg="Removing container: 242369a0a892123eb678ee784c6587e4592b7c6c2020bcc818a60094700dd714" id=0b289c0b-0975-4917-8417-83af42d200e4 name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 11 07:58:57 default-k8s-diff-port-585688 crio[580]: time="2026-01-11T07:58:57.451483539Z" level=info msg="Removed container 242369a0a892123eb678ee784c6587e4592b7c6c2020bcc818a60094700dd714: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-4pb6z/dashboard-metrics-scraper" id=0b289c0b-0975-4917-8417-83af42d200e4 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	5edc90c892262       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           10 seconds ago      Exited              dashboard-metrics-scraper   3                   dd3ba93cc5ada       dashboard-metrics-scraper-867fb5f87b-4pb6z             kubernetes-dashboard
	dcee32095eab7       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           22 seconds ago      Running             storage-provisioner         1                   c8a1f86285d7f       storage-provisioner                                    kube-system
	b582b197e0698       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   43 seconds ago      Running             kubernetes-dashboard        0                   4903293d1765e       kubernetes-dashboard-b84665fb8-z4znd                   kubernetes-dashboard
	11e65b52493b4       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                           53 seconds ago      Running             coredns                     0                   4fe7a241c7a13       coredns-7d764666f9-9n4hp                               kube-system
	6f273e5d5e1ad       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           53 seconds ago      Running             busybox                     1                   27811c43a42b5       busybox                                                default
	ba9522fd01ed5       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           53 seconds ago      Exited              storage-provisioner         0                   c8a1f86285d7f       storage-provisioner                                    kube-system
	36c8108840cbb       4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251                                           53 seconds ago      Running             kindnet-cni                 0                   4b007a88380be       kindnet-lz5hh                                          kube-system
	506df3158c5ad       32652ff1bbe6b6df16a3dc621fcad4e6e185a2852bffc5847aab79e36c54bfb8                                           53 seconds ago      Running             kube-proxy                  0                   c680fdc6a2f68       kube-proxy-5z6rc                                       kube-system
	8f22f3b6659f9       5c6acd67e9cd10eb60a246cd233db251ef62ea97e6572f897e873f0cb648f499                                           56 seconds ago      Running             kube-apiserver              0                   e3e5b8447c647       kube-apiserver-default-k8s-diff-port-585688            kube-system
	63e746d642de5       550794e3b12ac21ec7fd940bdfb45f7c8dae3c52acd8c70e580b882de20c3dcc                                           56 seconds ago      Running             kube-scheduler              0                   fdd3a0c5d40ad       kube-scheduler-default-k8s-diff-port-585688            kube-system
	4f9f1c19c75f7       0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2                                           56 seconds ago      Running             etcd                        0                   399310dc0edec       etcd-default-k8s-diff-port-585688                      kube-system
	a5a86114d8fe5       2c9a4b058bd7e6ec479c38f9e8a1dac2f5ee5b0a3ebda6dfac968e8720229508                                           56 seconds ago      Running             kube-controller-manager     0                   f61108a4d4416       kube-controller-manager-default-k8s-diff-port-585688   kube-system
	
	
	==> coredns [11e65b52493b4e94f1416a401649f24baddb7f3c4e0ac9e58b75f1a543d20479] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:40032 - 12963 "HINFO IN 7207735516347211502.7330963781429141352. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.016757651s
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-585688
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-585688
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4473aa4ffaa416872fe849e19c0ce3dabca02c04
	                    minikube.k8s.io/name=default-k8s-diff-port-585688
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2026_01_11T07_57_15_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 11 Jan 2026 07:57:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-585688
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 11 Jan 2026 07:58:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 11 Jan 2026 07:58:42 +0000   Sun, 11 Jan 2026 07:57:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 11 Jan 2026 07:58:42 +0000   Sun, 11 Jan 2026 07:57:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 11 Jan 2026 07:58:42 +0000   Sun, 11 Jan 2026 07:57:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 11 Jan 2026 07:58:42 +0000   Sun, 11 Jan 2026 07:57:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    default-k8s-diff-port-585688
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 19a608e996fda0d6a8d0aefd69620b69
	  System UUID:                a93c91b9-2231-4c51-8ee3-8e1fc4eb7cf8
	  Boot ID:                    bf5ef4fd-1e5e-4242-b522-48b308ed11f1
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.35.0
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 coredns-7d764666f9-9n4hp                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     107s
	  kube-system                 etcd-default-k8s-diff-port-585688                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         113s
	  kube-system                 kindnet-lz5hh                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      107s
	  kube-system                 kube-apiserver-default-k8s-diff-port-585688             250m (3%)     0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-585688    200m (2%)     0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-proxy-5z6rc                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system                 kube-scheduler-default-k8s-diff-port-585688             100m (1%)     0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  kubernetes-dashboard        dashboard-metrics-scraper-867fb5f87b-4pb6z              0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-z4znd                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  108s  node-controller  Node default-k8s-diff-port-585688 event: Registered Node default-k8s-diff-port-585688 in Controller
	  Normal  RegisteredNode  52s   node-controller  Node default-k8s-diff-port-585688 event: Registered Node default-k8s-diff-port-585688 in Controller
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff d2 27 60 f0 e8 33 08 06
	[  +7.744805] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ae ab 39 92 a9 51 08 06
	[  +3.804668] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff f6 cc 68 e4 77 08 08 06
	[  +0.000399] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff d2 27 60 f0 e8 33 08 06
	[ +13.685163] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff ce 63 58 3f ae 06 08 06
	[  +0.000359] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ae ab 39 92 a9 51 08 06
	[ +15.294069] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 62 fd 79 5e 2e 8a 08 06
	[  +0.000335] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 92 a7 42 30 c5 33 08 06
	[  +0.001094] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 76 49 4e 78 31 d8 08 06
	[Jan11 07:56] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 1e 72 0a 69 e2 8c 08 06
	[  +0.129588] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 46 31 8d 74 d3 50 08 06
	[ +23.675508] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 72 b4 a7 bf 59 ba 08 06
	[  +0.000409] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 46 31 8d 74 d3 50 08 06
	
	
	==> etcd [4f9f1c19c75f7ac1ecbae6c98fc1987feb975fc126f5bcd1565546bc42504918] <==
	{"level":"info","ts":"2026-01-11T07:58:11.549787Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"dfc97eb0aae75b33 has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2026-01-11T07:58:11.549834Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"dfc97eb0aae75b33 became leader at term 3"}
	{"level":"info","ts":"2026-01-11T07:58:11.549847Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: dfc97eb0aae75b33 elected leader dfc97eb0aae75b33 at term 3"}
	{"level":"info","ts":"2026-01-11T07:58:11.551083Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"dfc97eb0aae75b33","local-member-attributes":"{Name:default-k8s-diff-port-585688 ClientURLs:[https://192.168.94.2:2379]}","cluster-id":"da400bbece288f5a","publish-timeout":"7s"}
	{"level":"info","ts":"2026-01-11T07:58:11.551102Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-11T07:58:11.551082Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-11T07:58:11.551364Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2026-01-11T07:58:11.551396Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2026-01-11T07:58:11.552488Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-11T07:58:11.552553Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-11T07:58:11.555952Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.94.2:2379"}
	{"level":"info","ts":"2026-01-11T07:58:11.555998Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"warn","ts":"2026-01-11T07:58:21.454756Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"218.802827ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-7d764666f9-9n4hp\" limit:1 ","response":"range_response_count:1 size:5944"}
	{"level":"info","ts":"2026-01-11T07:58:21.454866Z","caller":"traceutil/trace.go:172","msg":"trace[289215474] range","detail":"{range_begin:/registry/pods/kube-system/coredns-7d764666f9-9n4hp; range_end:; response_count:1; response_revision:586; }","duration":"218.934123ms","start":"2026-01-11T07:58:21.235917Z","end":"2026-01-11T07:58:21.454851Z","steps":["trace[289215474] 'range keys from in-memory index tree'  (duration: 218.641073ms)"],"step_count":1}
	{"level":"info","ts":"2026-01-11T07:58:21.514898Z","caller":"traceutil/trace.go:172","msg":"trace[162352008] linearizableReadLoop","detail":"{readStateIndex:615; appliedIndex:615; }","duration":"171.715166ms","start":"2026-01-11T07:58:21.343151Z","end":"2026-01-11T07:58:21.514866Z","steps":["trace[162352008] 'read index received'  (duration: 171.693319ms)","trace[162352008] 'applied index is now lower than readState.Index'  (duration: 20.42µs)"],"step_count":2}
	{"level":"info","ts":"2026-01-11T07:58:21.515030Z","caller":"traceutil/trace.go:172","msg":"trace[130066207] transaction","detail":"{read_only:false; response_revision:587; number_of_response:1; }","duration":"172.092548ms","start":"2026-01-11T07:58:21.342920Z","end":"2026-01-11T07:58:21.515013Z","steps":["trace[130066207] 'process raft request'  (duration: 171.983629ms)"],"step_count":1}
	{"level":"warn","ts":"2026-01-11T07:58:21.515125Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"171.96068ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-4pb6z\" limit:1 ","response":"range_response_count:1 size:4622"}
	{"level":"info","ts":"2026-01-11T07:58:21.515652Z","caller":"traceutil/trace.go:172","msg":"trace[1994663194] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-4pb6z; range_end:; response_count:1; response_revision:586; }","duration":"172.067197ms","start":"2026-01-11T07:58:21.343119Z","end":"2026-01-11T07:58:21.515186Z","steps":["trace[1994663194] 'agreement among raft nodes before linearized reading'  (duration: 171.831084ms)"],"step_count":1}
	{"level":"warn","ts":"2026-01-11T07:58:21.913623Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"204.278798ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6571767444569432784 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/kube-controller-manager-default-k8s-diff-port-585688\" mod_revision:510 > success:<request_put:<key:\"/registry/pods/kube-system/kube-controller-manager-default-k8s-diff-port-585688\" value_size:7682 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-controller-manager-default-k8s-diff-port-585688\" > >>","response":"size:16"}
	{"level":"info","ts":"2026-01-11T07:58:21.913704Z","caller":"traceutil/trace.go:172","msg":"trace[1891278321] linearizableReadLoop","detail":"{readStateIndex:619; appliedIndex:618; }","duration":"177.930741ms","start":"2026-01-11T07:58:21.735762Z","end":"2026-01-11T07:58:21.913693Z","steps":["trace[1891278321] 'read index received'  (duration: 60.609µs)","trace[1891278321] 'applied index is now lower than readState.Index'  (duration: 177.869403ms)"],"step_count":2}
	{"level":"warn","ts":"2026-01-11T07:58:21.913867Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"178.100206ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-7d764666f9-9n4hp\" limit:1 ","response":"range_response_count:1 size:5944"}
	{"level":"info","ts":"2026-01-11T07:58:21.914030Z","caller":"traceutil/trace.go:172","msg":"trace[591670269] transaction","detail":"{read_only:false; response_revision:590; number_of_response:1; }","duration":"323.543214ms","start":"2026-01-11T07:58:21.590456Z","end":"2026-01-11T07:58:21.913999Z","steps":["trace[591670269] 'process raft request'  (duration: 118.495544ms)","trace[591670269] 'compare'  (duration: 204.061463ms)"],"step_count":2}
	{"level":"info","ts":"2026-01-11T07:58:21.913946Z","caller":"traceutil/trace.go:172","msg":"trace[89422722] range","detail":"{range_begin:/registry/pods/kube-system/coredns-7d764666f9-9n4hp; range_end:; response_count:1; response_revision:590; }","duration":"178.181821ms","start":"2026-01-11T07:58:21.735755Z","end":"2026-01-11T07:58:21.913937Z","steps":["trace[89422722] 'agreement among raft nodes before linearized reading'  (duration: 177.998489ms)"],"step_count":1}
	{"level":"warn","ts":"2026-01-11T07:58:21.914164Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2026-01-11T07:58:21.590428Z","time spent":"323.676965ms","remote":"127.0.0.1:43076","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":7769,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kube-controller-manager-default-k8s-diff-port-585688\" mod_revision:510 > success:<request_put:<key:\"/registry/pods/kube-system/kube-controller-manager-default-k8s-diff-port-585688\" value_size:7682 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-controller-manager-default-k8s-diff-port-585688\" > >"}
	{"level":"info","ts":"2026-01-11T07:58:33.229917Z","caller":"traceutil/trace.go:172","msg":"trace[741921538] transaction","detail":"{read_only:false; response_revision:616; number_of_response:1; }","duration":"129.560386ms","start":"2026-01-11T07:58:33.100309Z","end":"2026-01-11T07:58:33.229870Z","steps":["trace[741921538] 'process raft request'  (duration: 52.99275ms)","trace[741921538] 'compare'  (duration: 76.316437ms)"],"step_count":2}
	
	
	==> kernel <==
	 07:59:07 up 41 min,  0 user,  load average: 2.72, 3.32, 2.51
	Linux default-k8s-diff-port-585688 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [36c8108840cbbfcbfffcbdcb463b3ce8cb8f3e93f5fe9edafb5daf5a090e8112] <==
	I0111 07:58:13.892698       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0111 07:58:13.893006       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I0111 07:58:13.893342       1 main.go:148] setting mtu 1500 for CNI 
	I0111 07:58:13.893379       1 main.go:178] kindnetd IP family: "ipv4"
	I0111 07:58:13.893405       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2026-01-11T07:58:14Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0111 07:58:14.097273       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0111 07:58:14.097320       1 controller.go:381] "Waiting for informer caches to sync"
	I0111 07:58:14.097335       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0111 07:58:14.097485       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I0111 07:58:14.397881       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0111 07:58:14.397911       1 metrics.go:72] Registering metrics
	I0111 07:58:14.398041       1 controller.go:711] "Syncing nftables rules"
	I0111 07:58:24.097949       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0111 07:58:24.098019       1 main.go:301] handling current node
	I0111 07:58:34.097918       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0111 07:58:34.097976       1 main.go:301] handling current node
	I0111 07:58:44.098150       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0111 07:58:44.098189       1 main.go:301] handling current node
	I0111 07:58:54.099678       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0111 07:58:54.099733       1 main.go:301] handling current node
	I0111 07:59:04.100905       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0111 07:59:04.100950       1 main.go:301] handling current node
	
	
	==> kube-apiserver [8f22f3b6659f979a42e531e71e66b83f2e5a22b700a721070ae92fe483579939] <==
	I0111 07:58:12.625146       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0111 07:58:12.625347       1 shared_informer.go:377] "Caches are synced"
	I0111 07:58:12.625424       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I0111 07:58:12.625500       1 shared_informer.go:377] "Caches are synced"
	I0111 07:58:12.625510       1 shared_informer.go:377] "Caches are synced"
	I0111 07:58:12.626434       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I0111 07:58:12.626471       1 aggregator.go:187] initial CRD sync complete...
	I0111 07:58:12.626485       1 autoregister_controller.go:144] Starting autoregister controller
	I0111 07:58:12.626492       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0111 07:58:12.626498       1 cache.go:39] Caches are synced for autoregister controller
	I0111 07:58:12.631597       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E0111 07:58:12.633969       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0111 07:58:12.642930       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I0111 07:58:12.674728       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I0111 07:58:12.924397       1 controller.go:667] quota admission added evaluator for: namespaces
	I0111 07:58:12.960765       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I0111 07:58:12.984278       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0111 07:58:12.993878       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0111 07:58:13.003294       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I0111 07:58:13.039510       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.99.59.43"}
	I0111 07:58:13.050640       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.110.56.189"}
	I0111 07:58:13.528258       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I0111 07:58:16.215658       1 controller.go:667] quota admission added evaluator for: endpoints
	I0111 07:58:16.266423       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0111 07:58:16.465518       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [a5a86114d8fe5ac425c7c94815a54aaa43fc1dc726e4e3b0be78ef6291f0b2df] <==
	I0111 07:58:15.771090       1 shared_informer.go:377] "Caches are synced"
	I0111 07:58:15.771147       1 range_allocator.go:177] "Sending events to api server"
	I0111 07:58:15.771195       1 range_allocator.go:181] "Starting range CIDR allocator"
	I0111 07:58:15.771201       1 shared_informer.go:370] "Waiting for caches to sync"
	I0111 07:58:15.771206       1 shared_informer.go:377] "Caches are synced"
	I0111 07:58:15.771259       1 shared_informer.go:377] "Caches are synced"
	I0111 07:58:15.771304       1 shared_informer.go:377] "Caches are synced"
	I0111 07:58:15.771315       1 shared_informer.go:377] "Caches are synced"
	I0111 07:58:15.771331       1 shared_informer.go:377] "Caches are synced"
	I0111 07:58:15.771384       1 shared_informer.go:377] "Caches are synced"
	I0111 07:58:15.771417       1 shared_informer.go:377] "Caches are synced"
	I0111 07:58:15.771441       1 shared_informer.go:377] "Caches are synced"
	I0111 07:58:15.771594       1 shared_informer.go:377] "Caches are synced"
	I0111 07:58:15.771613       1 shared_informer.go:377] "Caches are synced"
	I0111 07:58:15.771695       1 shared_informer.go:377] "Caches are synced"
	I0111 07:58:15.771724       1 shared_informer.go:377] "Caches are synced"
	I0111 07:58:15.772466       1 shared_informer.go:377] "Caches are synced"
	I0111 07:58:15.772512       1 shared_informer.go:377] "Caches are synced"
	I0111 07:58:15.776335       1 shared_informer.go:377] "Caches are synced"
	I0111 07:58:15.780639       1 shared_informer.go:370] "Waiting for caches to sync"
	I0111 07:58:15.871219       1 shared_informer.go:377] "Caches are synced"
	I0111 07:58:15.871241       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I0111 07:58:15.871246       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I0111 07:58:15.881166       1 shared_informer.go:377] "Caches are synced"
	I0111 07:58:16.470065       1 endpointslice_controller.go:361] "Error syncing endpoint slices for service, retrying" key="kubernetes-dashboard/dashboard-metrics-scraper" err="EndpointSlice informer cache is out of date"
	
	
	==> kube-proxy [506df3158c5adff8c547b0ed655b3c34bf43771c0d58f5100ffa1fc8044ac26d] <==
	I0111 07:58:13.711138       1 server_linux.go:53] "Using iptables proxy"
	I0111 07:58:13.772878       1 shared_informer.go:370] "Waiting for caches to sync"
	I0111 07:58:13.874018       1 shared_informer.go:377] "Caches are synced"
	I0111 07:58:13.874054       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E0111 07:58:13.874154       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0111 07:58:13.894504       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0111 07:58:13.894573       1 server_linux.go:136] "Using iptables Proxier"
	I0111 07:58:13.900324       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0111 07:58:13.900617       1 server.go:529] "Version info" version="v1.35.0"
	I0111 07:58:13.900633       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0111 07:58:13.901880       1 config.go:106] "Starting endpoint slice config controller"
	I0111 07:58:13.901994       1 config.go:200] "Starting service config controller"
	I0111 07:58:13.902039       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0111 07:58:13.902010       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0111 07:58:13.901968       1 config.go:403] "Starting serviceCIDR config controller"
	I0111 07:58:13.902280       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0111 07:58:13.902346       1 config.go:309] "Starting node config controller"
	I0111 07:58:13.902354       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0111 07:58:13.902360       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0111 07:58:14.002435       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0111 07:58:14.002435       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0111 07:58:14.002501       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [63e746d642de550ccfb3f5c5bee17e53f1ac0ac24283ebc2ad8f7153b15dec23] <==
	I0111 07:58:11.236674       1 serving.go:386] Generated self-signed cert in-memory
	W0111 07:58:12.560265       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0111 07:58:12.560303       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0111 07:58:12.560315       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0111 07:58:12.560326       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0111 07:58:12.595875       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0"
	I0111 07:58:12.595916       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0111 07:58:12.599567       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0111 07:58:12.599600       1 shared_informer.go:370] "Waiting for caches to sync"
	I0111 07:58:12.599699       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0111 07:58:12.599842       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0111 07:58:12.700067       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Jan 11 07:58:29 default-k8s-diff-port-585688 kubelet[744]: E0111 07:58:29.431911     744 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-4pb6z_kubernetes-dashboard(b661fc50-c68f-431c-a8d5-c0e1231197bc)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-4pb6z" podUID="b661fc50-c68f-431c-a8d5-c0e1231197bc"
	Jan 11 07:58:31 default-k8s-diff-port-585688 kubelet[744]: E0111 07:58:31.281737     744 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-4pb6z" containerName="dashboard-metrics-scraper"
	Jan 11 07:58:31 default-k8s-diff-port-585688 kubelet[744]: I0111 07:58:31.281792     744 scope.go:122] "RemoveContainer" containerID="0c34f50ecf43c7aba09a03efe28f574baf9892bb7e3428221642c03ca41a9aff"
	Jan 11 07:58:31 default-k8s-diff-port-585688 kubelet[744]: I0111 07:58:31.368696     744 scope.go:122] "RemoveContainer" containerID="0c34f50ecf43c7aba09a03efe28f574baf9892bb7e3428221642c03ca41a9aff"
	Jan 11 07:58:31 default-k8s-diff-port-585688 kubelet[744]: E0111 07:58:31.368996     744 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-4pb6z" containerName="dashboard-metrics-scraper"
	Jan 11 07:58:31 default-k8s-diff-port-585688 kubelet[744]: I0111 07:58:31.369033     744 scope.go:122] "RemoveContainer" containerID="242369a0a892123eb678ee784c6587e4592b7c6c2020bcc818a60094700dd714"
	Jan 11 07:58:31 default-k8s-diff-port-585688 kubelet[744]: E0111 07:58:31.369246     744 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-4pb6z_kubernetes-dashboard(b661fc50-c68f-431c-a8d5-c0e1231197bc)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-4pb6z" podUID="b661fc50-c68f-431c-a8d5-c0e1231197bc"
	Jan 11 07:58:39 default-k8s-diff-port-585688 kubelet[744]: E0111 07:58:39.432024     744 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-4pb6z" containerName="dashboard-metrics-scraper"
	Jan 11 07:58:39 default-k8s-diff-port-585688 kubelet[744]: I0111 07:58:39.432061     744 scope.go:122] "RemoveContainer" containerID="242369a0a892123eb678ee784c6587e4592b7c6c2020bcc818a60094700dd714"
	Jan 11 07:58:39 default-k8s-diff-port-585688 kubelet[744]: E0111 07:58:39.432239     744 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-4pb6z_kubernetes-dashboard(b661fc50-c68f-431c-a8d5-c0e1231197bc)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-4pb6z" podUID="b661fc50-c68f-431c-a8d5-c0e1231197bc"
	Jan 11 07:58:44 default-k8s-diff-port-585688 kubelet[744]: I0111 07:58:44.404157     744 scope.go:122] "RemoveContainer" containerID="ba9522fd01ed5cb4a0478ac7b6c3b1bf98766ee9be0146de39fa388399f29772"
	Jan 11 07:58:48 default-k8s-diff-port-585688 kubelet[744]: E0111 07:58:48.742674     744 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-9n4hp" containerName="coredns"
	Jan 11 07:58:57 default-k8s-diff-port-585688 kubelet[744]: E0111 07:58:57.281648     744 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-4pb6z" containerName="dashboard-metrics-scraper"
	Jan 11 07:58:57 default-k8s-diff-port-585688 kubelet[744]: I0111 07:58:57.281704     744 scope.go:122] "RemoveContainer" containerID="242369a0a892123eb678ee784c6587e4592b7c6c2020bcc818a60094700dd714"
	Jan 11 07:58:57 default-k8s-diff-port-585688 kubelet[744]: I0111 07:58:57.440596     744 scope.go:122] "RemoveContainer" containerID="242369a0a892123eb678ee784c6587e4592b7c6c2020bcc818a60094700dd714"
	Jan 11 07:58:57 default-k8s-diff-port-585688 kubelet[744]: E0111 07:58:57.440905     744 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-4pb6z" containerName="dashboard-metrics-scraper"
	Jan 11 07:58:57 default-k8s-diff-port-585688 kubelet[744]: I0111 07:58:57.440942     744 scope.go:122] "RemoveContainer" containerID="5edc90c892262436c0a2a920b53c5f04438fbac9879ebb9d336af2248627b478"
	Jan 11 07:58:57 default-k8s-diff-port-585688 kubelet[744]: E0111 07:58:57.441157     744 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-4pb6z_kubernetes-dashboard(b661fc50-c68f-431c-a8d5-c0e1231197bc)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-4pb6z" podUID="b661fc50-c68f-431c-a8d5-c0e1231197bc"
	Jan 11 07:58:59 default-k8s-diff-port-585688 kubelet[744]: E0111 07:58:59.431764     744 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-4pb6z" containerName="dashboard-metrics-scraper"
	Jan 11 07:58:59 default-k8s-diff-port-585688 kubelet[744]: I0111 07:58:59.431828     744 scope.go:122] "RemoveContainer" containerID="5edc90c892262436c0a2a920b53c5f04438fbac9879ebb9d336af2248627b478"
	Jan 11 07:58:59 default-k8s-diff-port-585688 kubelet[744]: E0111 07:58:59.432040     744 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-4pb6z_kubernetes-dashboard(b661fc50-c68f-431c-a8d5-c0e1231197bc)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-4pb6z" podUID="b661fc50-c68f-431c-a8d5-c0e1231197bc"
	Jan 11 07:59:02 default-k8s-diff-port-585688 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Jan 11 07:59:02 default-k8s-diff-port-585688 systemd[1]: kubelet.service: Deactivated successfully.
	Jan 11 07:59:02 default-k8s-diff-port-585688 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Jan 11 07:59:02 default-k8s-diff-port-585688 systemd[1]: kubelet.service: Consumed 1.783s CPU time.
	
	
	==> kubernetes-dashboard [b582b197e069882b2c2b968a83d200c6e23f323a5a4f092959b23fde2b0b85d0] <==
	2026/01/11 07:58:23 Using namespace: kubernetes-dashboard
	2026/01/11 07:58:23 Using in-cluster config to connect to apiserver
	2026/01/11 07:58:23 Using secret token for csrf signing
	2026/01/11 07:58:23 Initializing csrf token from kubernetes-dashboard-csrf secret
	2026/01/11 07:58:23 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2026/01/11 07:58:23 Successful initial request to the apiserver, version: v1.35.0
	2026/01/11 07:58:23 Generating JWE encryption key
	2026/01/11 07:58:23 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2026/01/11 07:58:23 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2026/01/11 07:58:24 Initializing JWE encryption key from synchronized object
	2026/01/11 07:58:24 Creating in-cluster Sidecar client
	2026/01/11 07:58:24 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2026/01/11 07:58:24 Serving insecurely on HTTP port: 9090
	2026/01/11 07:58:54 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2026/01/11 07:58:23 Starting overwatch
	
	
	==> storage-provisioner [ba9522fd01ed5cb4a0478ac7b6c3b1bf98766ee9be0146de39fa388399f29772] <==
	I0111 07:58:13.666882       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0111 07:58:43.670590       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [dcee32095eab7e64a77f4e9473740b27b306a967ae964d32750fce0b53ac5fd8] <==
	I0111 07:58:44.478538       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0111 07:58:44.488245       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0111 07:58:44.488317       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W0111 07:58:44.491376       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 07:58:47.947032       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 07:58:52.208063       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 07:58:55.806672       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 07:58:58.859904       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 07:59:01.882391       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 07:59:01.886981       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0111 07:59:01.887139       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0111 07:59:01.887283       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-585688_2c03ff84-8d4f-4730-8f7c-604368526a2a!
	I0111 07:59:01.887280       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"06bdb24f-e9c9-497c-a246-c1be0eb5956d", APIVersion:"v1", ResourceVersion:"646", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-585688_2c03ff84-8d4f-4730-8f7c-604368526a2a became leader
	W0111 07:59:01.889417       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 07:59:01.893477       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0111 07:59:01.987511       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-585688_2c03ff84-8d4f-4730-8f7c-604368526a2a!
	W0111 07:59:03.896602       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 07:59:03.900791       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 07:59:05.904632       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 07:59:05.909415       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-585688 -n default-k8s-diff-port-585688
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-585688 -n default-k8s-diff-port-585688: exit status 2 (340.093644ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context default-k8s-diff-port-585688 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (5.85s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (5.22s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-613901 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p newest-cni-613901 --alsologtostderr -v=1: exit status 80 (1.750789663s)

                                                
                                                
-- stdout --
	* Pausing node newest-cni-613901 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0111 07:59:14.997592  338733 out.go:360] Setting OutFile to fd 1 ...
	I0111 07:59:14.997918  338733 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 07:59:14.997931  338733 out.go:374] Setting ErrFile to fd 2...
	I0111 07:59:14.997938  338733 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 07:59:14.998255  338733 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22402-4689/.minikube/bin
	I0111 07:59:14.998604  338733 out.go:368] Setting JSON to false
	I0111 07:59:14.998626  338733 mustload.go:66] Loading cluster: newest-cni-613901
	I0111 07:59:14.999135  338733 config.go:182] Loaded profile config "newest-cni-613901": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 07:59:14.999841  338733 cli_runner.go:164] Run: docker container inspect newest-cni-613901 --format={{.State.Status}}
	I0111 07:59:15.023295  338733 host.go:66] Checking if "newest-cni-613901" exists ...
	I0111 07:59:15.023558  338733 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0111 07:59:15.089914  338733 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:54 SystemTime:2026-01-11 07:59:15.077541058 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0111 07:59:15.090566  338733 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22376/minikube-v1.37.0-1767438792-22376-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1767438792-22376/minikube-v1.37.0-1767438792-22376-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1767438792-22376-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) preload-source:auto profile:newest-cni-613901 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) rosetta:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I0111 07:59:15.092565  338733 out.go:179] * Pausing node newest-cni-613901 ... 
	I0111 07:59:15.093788  338733 host.go:66] Checking if "newest-cni-613901" exists ...
	I0111 07:59:15.094068  338733 ssh_runner.go:195] Run: systemctl --version
	I0111 07:59:15.094120  338733 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-613901
	I0111 07:59:15.112310  338733 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22402-4689/.minikube/machines/newest-cni-613901/id_rsa Username:docker}
	I0111 07:59:15.212329  338733 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0111 07:59:15.224642  338733 pause.go:52] kubelet running: true
	I0111 07:59:15.224698  338733 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0111 07:59:15.363727  338733 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I0111 07:59:15.363836  338733 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0111 07:59:15.428822  338733 cri.go:96] found id: "b5a7add7780eb91d02b1cb5d471c9be2b995066874ffca0b9fbabd59696a5ef7"
	I0111 07:59:15.428848  338733 cri.go:96] found id: "b50458451c17e544321f77f3c332d3986154320d5fb901d864c1c4290d29bc0f"
	I0111 07:59:15.428853  338733 cri.go:96] found id: "5fde0630295af45e89f5405104c2da9f4db5fab8eabd4bbb10a152471a695bdf"
	I0111 07:59:15.428857  338733 cri.go:96] found id: "7c6ff3c61a9404f9369402c76cb8f90e85bc10e8983eb76b28897356f832d40b"
	I0111 07:59:15.428859  338733 cri.go:96] found id: "9ac933f6049e2a09e4fb03396413211d137f8f7533b877795c11868e4bcbda45"
	I0111 07:59:15.428862  338733 cri.go:96] found id: "f2730725cc13c39e4654e49efcdc2e2e8c0f5b6de0213c0834f4b0842b6d5b2f"
	I0111 07:59:15.428865  338733 cri.go:96] found id: ""
	I0111 07:59:15.428913  338733 ssh_runner.go:195] Run: sudo runc list -f json
	I0111 07:59:15.440851  338733 retry.go:84] will retry after 300ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-11T07:59:15Z" level=error msg="open /run/runc: no such file or directory"
	I0111 07:59:15.738475  338733 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0111 07:59:15.751234  338733 pause.go:52] kubelet running: false
	I0111 07:59:15.751289  338733 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0111 07:59:15.861361  338733 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I0111 07:59:15.861427  338733 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0111 07:59:15.925577  338733 cri.go:96] found id: "b5a7add7780eb91d02b1cb5d471c9be2b995066874ffca0b9fbabd59696a5ef7"
	I0111 07:59:15.925603  338733 cri.go:96] found id: "b50458451c17e544321f77f3c332d3986154320d5fb901d864c1c4290d29bc0f"
	I0111 07:59:15.925607  338733 cri.go:96] found id: "5fde0630295af45e89f5405104c2da9f4db5fab8eabd4bbb10a152471a695bdf"
	I0111 07:59:15.925610  338733 cri.go:96] found id: "7c6ff3c61a9404f9369402c76cb8f90e85bc10e8983eb76b28897356f832d40b"
	I0111 07:59:15.925613  338733 cri.go:96] found id: "9ac933f6049e2a09e4fb03396413211d137f8f7533b877795c11868e4bcbda45"
	I0111 07:59:15.925618  338733 cri.go:96] found id: "f2730725cc13c39e4654e49efcdc2e2e8c0f5b6de0213c0834f4b0842b6d5b2f"
	I0111 07:59:15.925621  338733 cri.go:96] found id: ""
	I0111 07:59:15.925683  338733 ssh_runner.go:195] Run: sudo runc list -f json
	I0111 07:59:16.475656  338733 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0111 07:59:16.488160  338733 pause.go:52] kubelet running: false
	I0111 07:59:16.488238  338733 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0111 07:59:16.600339  338733 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I0111 07:59:16.600435  338733 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0111 07:59:16.669633  338733 cri.go:96] found id: "b5a7add7780eb91d02b1cb5d471c9be2b995066874ffca0b9fbabd59696a5ef7"
	I0111 07:59:16.669659  338733 cri.go:96] found id: "b50458451c17e544321f77f3c332d3986154320d5fb901d864c1c4290d29bc0f"
	I0111 07:59:16.669664  338733 cri.go:96] found id: "5fde0630295af45e89f5405104c2da9f4db5fab8eabd4bbb10a152471a695bdf"
	I0111 07:59:16.669667  338733 cri.go:96] found id: "7c6ff3c61a9404f9369402c76cb8f90e85bc10e8983eb76b28897356f832d40b"
	I0111 07:59:16.669670  338733 cri.go:96] found id: "9ac933f6049e2a09e4fb03396413211d137f8f7533b877795c11868e4bcbda45"
	I0111 07:59:16.669674  338733 cri.go:96] found id: "f2730725cc13c39e4654e49efcdc2e2e8c0f5b6de0213c0834f4b0842b6d5b2f"
	I0111 07:59:16.669676  338733 cri.go:96] found id: ""
	I0111 07:59:16.669712  338733 ssh_runner.go:195] Run: sudo runc list -f json
	I0111 07:59:16.683603  338733 out.go:203] 
	W0111 07:59:16.684899  338733 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-11T07:59:16Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-11T07:59:16Z" level=error msg="open /run/runc: no such file or directory"
	
	W0111 07:59:16.684917  338733 out.go:285] * 
	* 
	W0111 07:59:16.686630  338733 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0111 07:59:16.687738  338733 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p newest-cni-613901 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-613901
helpers_test.go:244: (dbg) docker inspect newest-cni-613901:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "11841d4ece70946c0082b1716d9cbfed372d03bdc672f9a541df8957a6873d40",
	        "Created": "2026-01-11T07:58:23.124097279Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 335265,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2026-01-11T07:59:04.679127461Z",
	            "FinishedAt": "2026-01-11T07:59:03.786681409Z"
	        },
	        "Image": "sha256:9d81ad6f098dcf669510a21e6f98da4d3301079c139c22f3d7bcee050830f45e",
	        "ResolvConfPath": "/var/lib/docker/containers/11841d4ece70946c0082b1716d9cbfed372d03bdc672f9a541df8957a6873d40/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/11841d4ece70946c0082b1716d9cbfed372d03bdc672f9a541df8957a6873d40/hostname",
	        "HostsPath": "/var/lib/docker/containers/11841d4ece70946c0082b1716d9cbfed372d03bdc672f9a541df8957a6873d40/hosts",
	        "LogPath": "/var/lib/docker/containers/11841d4ece70946c0082b1716d9cbfed372d03bdc672f9a541df8957a6873d40/11841d4ece70946c0082b1716d9cbfed372d03bdc672f9a541df8957a6873d40-json.log",
	        "Name": "/newest-cni-613901",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-613901:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-613901",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "11841d4ece70946c0082b1716d9cbfed372d03bdc672f9a541df8957a6873d40",
	                "LowerDir": "/var/lib/docker/overlay2/24467b78c5d7726ce7b085d0dd3f727668774fcb8480ab182ba19ae8a2f0c8f6-init/diff:/var/lib/docker/overlay2/fbed3e2386e680328722e9ed68250a4aae1dd3b50a64848ee2ebb685fb1cc002/diff",
	                "MergedDir": "/var/lib/docker/overlay2/24467b78c5d7726ce7b085d0dd3f727668774fcb8480ab182ba19ae8a2f0c8f6/merged",
	                "UpperDir": "/var/lib/docker/overlay2/24467b78c5d7726ce7b085d0dd3f727668774fcb8480ab182ba19ae8a2f0c8f6/diff",
	                "WorkDir": "/var/lib/docker/overlay2/24467b78c5d7726ce7b085d0dd3f727668774fcb8480ab182ba19ae8a2f0c8f6/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-613901",
	                "Source": "/var/lib/docker/volumes/newest-cni-613901/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-613901",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-613901",
	                "name.minikube.sigs.k8s.io": "newest-cni-613901",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "b6a08a0854c8f9a2ff1628f088c3033e0c724ffe2ba7a3a6dc7235ca725e1729",
	            "SandboxKey": "/var/run/docker/netns/b6a08a0854c8",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33133"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33134"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33137"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33135"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33136"
	                    }
	                ]
	            },
	            "Networks": {
	                "newest-cni-613901": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "cf117b97d00c25a416ee9de4a3b71e11b7933ae5b129ce6cd466ea7394700035",
	                    "EndpointID": "dbb3e5335fe163416c30daa722331c5774e618549aa241febedcdf425f85ae76",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "12:ce:24:95:73:cf",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-613901",
	                        "11841d4ece70"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-613901 -n newest-cni-613901
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-613901 -n newest-cni-613901: exit status 2 (338.791212ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-613901 logs -n 25
helpers_test.go:261: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────
────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │              PROFILE              │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────
────────────┤
	│ pause   │ -p no-preload-875594 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-875594                 │ jenkins │ v1.37.0 │ 11 Jan 26 07:58 UTC │                     │
	│ image   │ embed-certs-285966 image list --format=json                                                                                                                                                                                                   │ embed-certs-285966                │ jenkins │ v1.37.0 │ 11 Jan 26 07:58 UTC │ 11 Jan 26 07:58 UTC │
	│ pause   │ -p embed-certs-285966 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-285966                │ jenkins │ v1.37.0 │ 11 Jan 26 07:58 UTC │                     │
	│ delete  │ -p old-k8s-version-086314                                                                                                                                                                                                                     │ old-k8s-version-086314            │ jenkins │ v1.37.0 │ 11 Jan 26 07:58 UTC │ 11 Jan 26 07:58 UTC │
	│ start   │ -p newest-cni-613901 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ newest-cni-613901                 │ jenkins │ v1.37.0 │ 11 Jan 26 07:58 UTC │ 11 Jan 26 07:58 UTC │
	│ delete  │ -p no-preload-875594                                                                                                                                                                                                                          │ no-preload-875594                 │ jenkins │ v1.37.0 │ 11 Jan 26 07:58 UTC │ 11 Jan 26 07:58 UTC │
	│ delete  │ -p embed-certs-285966                                                                                                                                                                                                                         │ embed-certs-285966                │ jenkins │ v1.37.0 │ 11 Jan 26 07:58 UTC │ 11 Jan 26 07:58 UTC │
	│ delete  │ -p no-preload-875594                                                                                                                                                                                                                          │ no-preload-875594                 │ jenkins │ v1.37.0 │ 11 Jan 26 07:58 UTC │ 11 Jan 26 07:58 UTC │
	│ start   │ -p test-preload-dl-gcs-476138 --download-only --kubernetes-version v1.34.0-rc.1 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=crio                                                                        │ test-preload-dl-gcs-476138        │ jenkins │ v1.37.0 │ 11 Jan 26 07:58 UTC │                     │
	│ delete  │ -p embed-certs-285966                                                                                                                                                                                                                         │ embed-certs-285966                │ jenkins │ v1.37.0 │ 11 Jan 26 07:58 UTC │ 11 Jan 26 07:58 UTC │
	│ delete  │ -p test-preload-dl-gcs-476138                                                                                                                                                                                                                 │ test-preload-dl-gcs-476138        │ jenkins │ v1.37.0 │ 11 Jan 26 07:58 UTC │ 11 Jan 26 07:58 UTC │
	│ start   │ -p test-preload-dl-github-044639 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=github --alsologtostderr --v=1 --driver=docker  --container-runtime=crio                                                                  │ test-preload-dl-github-044639     │ jenkins │ v1.37.0 │ 11 Jan 26 07:58 UTC │                     │
	│ delete  │ -p test-preload-dl-github-044639                                                                                                                                                                                                              │ test-preload-dl-github-044639     │ jenkins │ v1.37.0 │ 11 Jan 26 07:58 UTC │ 11 Jan 26 07:58 UTC │
	│ start   │ -p test-preload-dl-gcs-cached-787958 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=crio                                                                 │ test-preload-dl-gcs-cached-787958 │ jenkins │ v1.37.0 │ 11 Jan 26 07:58 UTC │                     │
	│ delete  │ -p test-preload-dl-gcs-cached-787958                                                                                                                                                                                                          │ test-preload-dl-gcs-cached-787958 │ jenkins │ v1.37.0 │ 11 Jan 26 07:58 UTC │ 11 Jan 26 07:58 UTC │
	│ addons  │ enable metrics-server -p newest-cni-613901 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-613901                 │ jenkins │ v1.37.0 │ 11 Jan 26 07:58 UTC │                     │
	│ stop    │ -p newest-cni-613901 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-613901                 │ jenkins │ v1.37.0 │ 11 Jan 26 07:58 UTC │ 11 Jan 26 07:59 UTC │
	│ image   │ default-k8s-diff-port-585688 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-585688      │ jenkins │ v1.37.0 │ 11 Jan 26 07:59 UTC │ 11 Jan 26 07:59 UTC │
	│ pause   │ -p default-k8s-diff-port-585688 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-585688      │ jenkins │ v1.37.0 │ 11 Jan 26 07:59 UTC │                     │
	│ addons  │ enable dashboard -p newest-cni-613901 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-613901                 │ jenkins │ v1.37.0 │ 11 Jan 26 07:59 UTC │ 11 Jan 26 07:59 UTC │
	│ start   │ -p newest-cni-613901 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ newest-cni-613901                 │ jenkins │ v1.37.0 │ 11 Jan 26 07:59 UTC │ 11 Jan 26 07:59 UTC │
	│ delete  │ -p default-k8s-diff-port-585688                                                                                                                                                                                                               │ default-k8s-diff-port-585688      │ jenkins │ v1.37.0 │ 11 Jan 26 07:59 UTC │ 11 Jan 26 07:59 UTC │
	│ delete  │ -p default-k8s-diff-port-585688                                                                                                                                                                                                               │ default-k8s-diff-port-585688      │ jenkins │ v1.37.0 │ 11 Jan 26 07:59 UTC │ 11 Jan 26 07:59 UTC │
	│ image   │ newest-cni-613901 image list --format=json                                                                                                                                                                                                    │ newest-cni-613901                 │ jenkins │ v1.37.0 │ 11 Jan 26 07:59 UTC │ 11 Jan 26 07:59 UTC │
	│ pause   │ -p newest-cni-613901 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-613901                 │ jenkins │ v1.37.0 │ 11 Jan 26 07:59 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────
────────────┘
	
	
	==> Last Start <==
	Log file created at: 2026/01/11 07:59:04
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0111 07:59:04.424321  334964 out.go:360] Setting OutFile to fd 1 ...
	I0111 07:59:04.424599  334964 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 07:59:04.424609  334964 out.go:374] Setting ErrFile to fd 2...
	I0111 07:59:04.424616  334964 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 07:59:04.424850  334964 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22402-4689/.minikube/bin
	I0111 07:59:04.425313  334964 out.go:368] Setting JSON to false
	I0111 07:59:04.426460  334964 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2492,"bootTime":1768115852,"procs":407,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0111 07:59:04.426513  334964 start.go:143] virtualization: kvm guest
	I0111 07:59:04.428404  334964 out.go:179] * [newest-cni-613901] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0111 07:59:04.429688  334964 out.go:179]   - MINIKUBE_LOCATION=22402
	I0111 07:59:04.429693  334964 notify.go:221] Checking for updates...
	I0111 07:59:04.432070  334964 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0111 07:59:04.433340  334964 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22402-4689/kubeconfig
	I0111 07:59:04.434705  334964 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22402-4689/.minikube
	I0111 07:59:04.435864  334964 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0111 07:59:04.437099  334964 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0111 07:59:04.438644  334964 config.go:182] Loaded profile config "newest-cni-613901": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 07:59:04.439184  334964 driver.go:422] Setting default libvirt URI to qemu:///system
	I0111 07:59:04.469255  334964 docker.go:124] docker version: linux-29.1.4:Docker Engine - Community
	I0111 07:59:04.469400  334964 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0111 07:59:04.532551  334964 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:54 SystemTime:2026-01-11 07:59:04.522566296 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0111 07:59:04.532660  334964 docker.go:319] overlay module found
	I0111 07:59:04.534785  334964 out.go:179] * Using the docker driver based on existing profile
	I0111 07:59:04.536472  334964 start.go:309] selected driver: docker
	I0111 07:59:04.536485  334964 start.go:928] validating driver "docker" against &{Name:newest-cni-613901 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-613901 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0111 07:59:04.536564  334964 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0111 07:59:04.537160  334964 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0111 07:59:04.596657  334964 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:54 SystemTime:2026-01-11 07:59:04.586303209 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0111 07:59:04.597010  334964 start_flags.go:1038] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0111 07:59:04.597044  334964 cni.go:84] Creating CNI manager for ""
	I0111 07:59:04.597131  334964 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0111 07:59:04.597199  334964 start.go:353] cluster config:
	{Name:newest-cni-613901 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-613901 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0111 07:59:04.599224  334964 out.go:179] * Starting "newest-cni-613901" primary control-plane node in "newest-cni-613901" cluster
	I0111 07:59:04.601496  334964 cache.go:134] Beginning downloading kic base image for docker with crio
	I0111 07:59:04.604261  334964 out.go:179] * Pulling base image v0.0.48-1768032998-22402 ...
	I0111 07:59:04.605369  334964 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0111 07:59:04.605402  334964 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22402-4689/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4
	I0111 07:59:04.605417  334964 cache.go:65] Caching tarball of preloaded images
	I0111 07:59:04.605467  334964 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 in local docker daemon
	I0111 07:59:04.605487  334964 preload.go:251] Found /home/jenkins/minikube-integration/22402-4689/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0111 07:59:04.605494  334964 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I0111 07:59:04.605578  334964 profile.go:143] Saving config to /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/newest-cni-613901/config.json ...
	I0111 07:59:04.627520  334964 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 in local docker daemon, skipping pull
	I0111 07:59:04.627538  334964 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 exists in daemon, skipping load
	I0111 07:59:04.627553  334964 cache.go:243] Successfully downloaded all kic artifacts
	I0111 07:59:04.627579  334964 start.go:360] acquireMachinesLock for newest-cni-613901: {Name:mk432dc96cb4574bc050bdfd86d1212df9b7472d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0111 07:59:04.627639  334964 start.go:364] duration metric: took 35.746µs to acquireMachinesLock for "newest-cni-613901"
	I0111 07:59:04.627656  334964 start.go:96] Skipping create...Using existing machine configuration
	I0111 07:59:04.627663  334964 fix.go:54] fixHost starting: 
	I0111 07:59:04.627886  334964 cli_runner.go:164] Run: docker container inspect newest-cni-613901 --format={{.State.Status}}
	I0111 07:59:04.649205  334964 fix.go:112] recreateIfNeeded on newest-cni-613901: state=Stopped err=<nil>
	W0111 07:59:04.649242  334964 fix.go:138] unexpected machine state, will restart: <nil>
	I0111 07:59:04.651030  334964 out.go:252] * Restarting existing docker container for "newest-cni-613901" ...
	I0111 07:59:04.651086  334964 cli_runner.go:164] Run: docker start newest-cni-613901
	I0111 07:59:04.906273  334964 cli_runner.go:164] Run: docker container inspect newest-cni-613901 --format={{.State.Status}}
	I0111 07:59:04.926949  334964 kic.go:430] container "newest-cni-613901" state is running.
	I0111 07:59:04.927420  334964 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-613901
	I0111 07:59:04.949540  334964 profile.go:143] Saving config to /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/newest-cni-613901/config.json ...
	I0111 07:59:04.949755  334964 machine.go:94] provisionDockerMachine start ...
	I0111 07:59:04.949827  334964 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-613901
	I0111 07:59:04.970305  334964 main.go:144] libmachine: Using SSH client type: native
	I0111 07:59:04.970623  334964 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x9087e0] 0x90b480 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I0111 07:59:04.970642  334964 main.go:144] libmachine: About to run SSH command:
	hostname
	I0111 07:59:04.971418  334964 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:50664->127.0.0.1:33133: read: connection reset by peer
	I0111 07:59:08.119568  334964 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-613901
	
	I0111 07:59:08.119592  334964 ubuntu.go:182] provisioning hostname "newest-cni-613901"
	I0111 07:59:08.119661  334964 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-613901
	I0111 07:59:08.138990  334964 main.go:144] libmachine: Using SSH client type: native
	I0111 07:59:08.139243  334964 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x9087e0] 0x90b480 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I0111 07:59:08.139257  334964 main.go:144] libmachine: About to run SSH command:
	sudo hostname newest-cni-613901 && echo "newest-cni-613901" | sudo tee /etc/hostname
	I0111 07:59:08.302461  334964 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-613901
	
	I0111 07:59:08.302525  334964 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-613901
	I0111 07:59:08.321551  334964 main.go:144] libmachine: Using SSH client type: native
	I0111 07:59:08.321840  334964 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x9087e0] 0x90b480 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I0111 07:59:08.321865  334964 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-613901' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-613901/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-613901' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0111 07:59:08.465677  334964 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I0111 07:59:08.465711  334964 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22402-4689/.minikube CaCertPath:/home/jenkins/minikube-integration/22402-4689/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22402-4689/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22402-4689/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22402-4689/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22402-4689/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22402-4689/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22402-4689/.minikube}
	I0111 07:59:08.465736  334964 ubuntu.go:190] setting up certificates
	I0111 07:59:08.465749  334964 provision.go:84] configureAuth start
	I0111 07:59:08.465827  334964 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-613901
	I0111 07:59:08.485658  334964 provision.go:143] copyHostCerts
	I0111 07:59:08.485723  334964 exec_runner.go:144] found /home/jenkins/minikube-integration/22402-4689/.minikube/ca.pem, removing ...
	I0111 07:59:08.485738  334964 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22402-4689/.minikube/ca.pem
	I0111 07:59:08.485834  334964 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22402-4689/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22402-4689/.minikube/ca.pem (1082 bytes)
	I0111 07:59:08.485990  334964 exec_runner.go:144] found /home/jenkins/minikube-integration/22402-4689/.minikube/cert.pem, removing ...
	I0111 07:59:08.486003  334964 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22402-4689/.minikube/cert.pem
	I0111 07:59:08.486047  334964 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22402-4689/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22402-4689/.minikube/cert.pem (1123 bytes)
	I0111 07:59:08.486145  334964 exec_runner.go:144] found /home/jenkins/minikube-integration/22402-4689/.minikube/key.pem, removing ...
	I0111 07:59:08.486156  334964 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22402-4689/.minikube/key.pem
	I0111 07:59:08.486197  334964 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22402-4689/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22402-4689/.minikube/key.pem (1675 bytes)
	I0111 07:59:08.486288  334964 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22402-4689/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22402-4689/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22402-4689/.minikube/certs/ca-key.pem org=jenkins.newest-cni-613901 san=[127.0.0.1 192.168.103.2 localhost minikube newest-cni-613901]
	I0111 07:59:08.563405  334964 provision.go:177] copyRemoteCerts
	I0111 07:59:08.563468  334964 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0111 07:59:08.563515  334964 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-613901
	I0111 07:59:08.581934  334964 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22402-4689/.minikube/machines/newest-cni-613901/id_rsa Username:docker}
	I0111 07:59:08.688570  334964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-4689/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0111 07:59:08.710080  334964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-4689/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0111 07:59:08.729446  334964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-4689/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0111 07:59:08.747200  334964 provision.go:87] duration metric: took 281.43269ms to configureAuth
	I0111 07:59:08.747232  334964 ubuntu.go:206] setting minikube options for container-runtime
	I0111 07:59:08.747407  334964 config.go:182] Loaded profile config "newest-cni-613901": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 07:59:08.747525  334964 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-613901
	I0111 07:59:08.765317  334964 main.go:144] libmachine: Using SSH client type: native
	I0111 07:59:08.765552  334964 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x9087e0] 0x90b480 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I0111 07:59:08.765574  334964 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0111 07:59:09.061618  334964 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0111 07:59:09.061646  334964 machine.go:97] duration metric: took 4.111878412s to provisionDockerMachine
	I0111 07:59:09.061661  334964 start.go:293] postStartSetup for "newest-cni-613901" (driver="docker")
	I0111 07:59:09.061672  334964 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0111 07:59:09.061723  334964 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0111 07:59:09.061770  334964 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-613901
	I0111 07:59:09.082123  334964 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22402-4689/.minikube/machines/newest-cni-613901/id_rsa Username:docker}
	I0111 07:59:09.183323  334964 ssh_runner.go:195] Run: cat /etc/os-release
	I0111 07:59:09.186782  334964 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0111 07:59:09.186815  334964 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I0111 07:59:09.186825  334964 filesync.go:126] Scanning /home/jenkins/minikube-integration/22402-4689/.minikube/addons for local assets ...
	I0111 07:59:09.186867  334964 filesync.go:126] Scanning /home/jenkins/minikube-integration/22402-4689/.minikube/files for local assets ...
	I0111 07:59:09.186952  334964 filesync.go:149] local asset: /home/jenkins/minikube-integration/22402-4689/.minikube/files/etc/ssl/certs/82382.pem -> 82382.pem in /etc/ssl/certs
	I0111 07:59:09.187045  334964 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0111 07:59:09.194295  334964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-4689/.minikube/files/etc/ssl/certs/82382.pem --> /etc/ssl/certs/82382.pem (1708 bytes)
	I0111 07:59:09.211168  334964 start.go:296] duration metric: took 149.492819ms for postStartSetup
	I0111 07:59:09.211254  334964 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0111 07:59:09.211359  334964 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-613901
	I0111 07:59:09.229457  334964 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22402-4689/.minikube/machines/newest-cni-613901/id_rsa Username:docker}
	I0111 07:59:09.327637  334964 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0111 07:59:09.332016  334964 fix.go:56] duration metric: took 4.704346022s for fixHost
	I0111 07:59:09.332047  334964 start.go:83] releasing machines lock for "newest-cni-613901", held for 4.704396847s
	I0111 07:59:09.332108  334964 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-613901
	I0111 07:59:09.350072  334964 ssh_runner.go:195] Run: cat /version.json
	I0111 07:59:09.350136  334964 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-613901
	I0111 07:59:09.350179  334964 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0111 07:59:09.350235  334964 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-613901
	I0111 07:59:09.368179  334964 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22402-4689/.minikube/machines/newest-cni-613901/id_rsa Username:docker}
	I0111 07:59:09.369661  334964 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22402-4689/.minikube/machines/newest-cni-613901/id_rsa Username:docker}
	I0111 07:59:09.520495  334964 ssh_runner.go:195] Run: systemctl --version
	I0111 07:59:09.526908  334964 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0111 07:59:09.561332  334964 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0111 07:59:09.566073  334964 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0111 07:59:09.566163  334964 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0111 07:59:09.574262  334964 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0111 07:59:09.574284  334964 start.go:496] detecting cgroup driver to use...
	I0111 07:59:09.574338  334964 detect.go:178] detected "systemd" cgroup driver on host os
	I0111 07:59:09.574386  334964 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0111 07:59:09.588569  334964 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0111 07:59:09.600737  334964 docker.go:218] disabling cri-docker service (if available) ...
	I0111 07:59:09.600792  334964 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0111 07:59:09.615043  334964 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0111 07:59:09.627315  334964 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0111 07:59:09.713456  334964 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0111 07:59:09.799092  334964 docker.go:234] disabling docker service ...
	I0111 07:59:09.799168  334964 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0111 07:59:09.814422  334964 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0111 07:59:09.830147  334964 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0111 07:59:09.905258  334964 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0111 07:59:09.989837  334964 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0111 07:59:10.002237  334964 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0111 07:59:10.016248  334964 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0111 07:59:10.016297  334964 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 07:59:10.025347  334964 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I0111 07:59:10.025408  334964 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 07:59:10.034093  334964 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 07:59:10.043106  334964 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 07:59:10.052062  334964 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0111 07:59:10.059952  334964 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 07:59:10.069972  334964 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 07:59:10.098820  334964 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 07:59:10.108970  334964 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0111 07:59:10.116912  334964 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0111 07:59:10.124554  334964 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0111 07:59:10.223749  334964 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0111 07:59:10.441457  334964 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I0111 07:59:10.441531  334964 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0111 07:59:10.445621  334964 start.go:574] Will wait 60s for crictl version
	I0111 07:59:10.445686  334964 ssh_runner.go:195] Run: which crictl
	I0111 07:59:10.449378  334964 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I0111 07:59:10.475487  334964 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.35.0
	RuntimeApiVersion:  v1
	I0111 07:59:10.475584  334964 ssh_runner.go:195] Run: crio --version
	I0111 07:59:10.502039  334964 ssh_runner.go:195] Run: crio --version
	I0111 07:59:10.534039  334964 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.35.0 ...
	I0111 07:59:10.536011  334964 cli_runner.go:164] Run: docker network inspect newest-cni-613901 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0111 07:59:10.555687  334964 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I0111 07:59:10.559996  334964 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0111 07:59:10.574386  334964 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0111 07:59:10.576441  334964 kubeadm.go:884] updating cluster {Name:newest-cni-613901 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-613901 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I0111 07:59:10.576572  334964 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0111 07:59:10.576627  334964 ssh_runner.go:195] Run: sudo crictl images --output json
	I0111 07:59:10.611129  334964 crio.go:561] all images are preloaded for cri-o runtime.
	I0111 07:59:10.611153  334964 crio.go:433] Images already preloaded, skipping extraction
	I0111 07:59:10.611200  334964 ssh_runner.go:195] Run: sudo crictl images --output json
	I0111 07:59:10.636738  334964 crio.go:561] all images are preloaded for cri-o runtime.
	I0111 07:59:10.636758  334964 cache_images.go:86] Images are preloaded, skipping loading
	I0111 07:59:10.636765  334964 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.35.0 crio true true} ...
	I0111 07:59:10.636878  334964 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-613901 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:newest-cni-613901 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0111 07:59:10.636946  334964 ssh_runner.go:195] Run: crio config
	I0111 07:59:10.686813  334964 cni.go:84] Creating CNI manager for ""
	I0111 07:59:10.686841  334964 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0111 07:59:10.686859  334964 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I0111 07:59:10.686913  334964 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-613901 NodeName:newest-cni-613901 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0111 07:59:10.687114  334964 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-613901"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0111 07:59:10.687186  334964 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I0111 07:59:10.696203  334964 binaries.go:51] Found k8s binaries, skipping transfer
	I0111 07:59:10.696275  334964 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0111 07:59:10.704132  334964 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I0111 07:59:10.716288  334964 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0111 07:59:10.728886  334964 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I0111 07:59:10.741061  334964 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I0111 07:59:10.744448  334964 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0111 07:59:10.753708  334964 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0111 07:59:10.842887  334964 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0111 07:59:10.865245  334964 certs.go:69] Setting up /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/newest-cni-613901 for IP: 192.168.103.2
	I0111 07:59:10.865267  334964 certs.go:195] generating shared ca certs ...
	I0111 07:59:10.865284  334964 certs.go:227] acquiring lock for ca certs: {Name:mk9f7a57f61b85f2c0f52e44b6a926769e368f21 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 07:59:10.865438  334964 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22402-4689/.minikube/ca.key
	I0111 07:59:10.865497  334964 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22402-4689/.minikube/proxy-client-ca.key
	I0111 07:59:10.865510  334964 certs.go:257] generating profile certs ...
	I0111 07:59:10.865631  334964 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/newest-cni-613901/client.key
	I0111 07:59:10.865716  334964 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/newest-cni-613901/apiserver.key.e492ee62
	I0111 07:59:10.865775  334964 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/newest-cni-613901/proxy-client.key
	I0111 07:59:10.865949  334964 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-4689/.minikube/certs/8238.pem (1338 bytes)
	W0111 07:59:10.865993  334964 certs.go:480] ignoring /home/jenkins/minikube-integration/22402-4689/.minikube/certs/8238_empty.pem, impossibly tiny 0 bytes
	I0111 07:59:10.866007  334964 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-4689/.minikube/certs/ca-key.pem (1679 bytes)
	I0111 07:59:10.866045  334964 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-4689/.minikube/certs/ca.pem (1082 bytes)
	I0111 07:59:10.866077  334964 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-4689/.minikube/certs/cert.pem (1123 bytes)
	I0111 07:59:10.866113  334964 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-4689/.minikube/certs/key.pem (1675 bytes)
	I0111 07:59:10.866180  334964 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-4689/.minikube/files/etc/ssl/certs/82382.pem (1708 bytes)
	I0111 07:59:10.866793  334964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-4689/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0111 07:59:10.887401  334964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-4689/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0111 07:59:10.906250  334964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-4689/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0111 07:59:10.924604  334964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-4689/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0111 07:59:10.949852  334964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/newest-cni-613901/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0111 07:59:10.968808  334964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/newest-cni-613901/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0111 07:59:10.985918  334964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/newest-cni-613901/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0111 07:59:11.002833  334964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/newest-cni-613901/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0111 07:59:11.020378  334964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-4689/.minikube/certs/8238.pem --> /usr/share/ca-certificates/8238.pem (1338 bytes)
	I0111 07:59:11.037836  334964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-4689/.minikube/files/etc/ssl/certs/82382.pem --> /usr/share/ca-certificates/82382.pem (1708 bytes)
	I0111 07:59:11.055621  334964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-4689/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0111 07:59:11.075411  334964 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I0111 07:59:11.088459  334964 ssh_runner.go:195] Run: openssl version
	I0111 07:59:11.095531  334964 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8238.pem
	I0111 07:59:11.102978  334964 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8238.pem /etc/ssl/certs/8238.pem
	I0111 07:59:11.110273  334964 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8238.pem
	I0111 07:59:11.113995  334964 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 11 07:27 /usr/share/ca-certificates/8238.pem
	I0111 07:59:11.114043  334964 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8238.pem
	I0111 07:59:11.151346  334964 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I0111 07:59:11.159413  334964 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/82382.pem
	I0111 07:59:11.167366  334964 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/82382.pem /etc/ssl/certs/82382.pem
	I0111 07:59:11.174866  334964 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/82382.pem
	I0111 07:59:11.179103  334964 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 11 07:27 /usr/share/ca-certificates/82382.pem
	I0111 07:59:11.179160  334964 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/82382.pem
	I0111 07:59:11.216605  334964 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I0111 07:59:11.224963  334964 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I0111 07:59:11.232698  334964 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I0111 07:59:11.240302  334964 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0111 07:59:11.244155  334964 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 11 07:23 /usr/share/ca-certificates/minikubeCA.pem
	I0111 07:59:11.244199  334964 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0111 07:59:11.279346  334964 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I0111 07:59:11.286669  334964 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0111 07:59:11.290285  334964 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0111 07:59:11.324532  334964 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0111 07:59:11.358429  334964 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0111 07:59:11.398855  334964 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0111 07:59:11.440349  334964 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0111 07:59:11.480988  334964 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
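	Two small OpenSSL idioms are at work in the certificate block above: each CA is exposed to the system trust store as /etc/ssl/certs/<subject-hash>.0 (the b5213941.0 symlink is the subject hash of minikubeCA.pem), and each serving/client certificate is verified with -checkend 86400, which exits non-zero if the certificate would expire within the next 24 hours. A minimal sketch using the same paths as the log:

	    # recompute the subject hash and recreate the trust-store symlink for the minikube CA
	    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"
	    # exit status 0 means the certificate is still valid 86400s (24h) from now
	    sudo openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/apiserver-kubelet-client.crt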
	I0111 07:59:11.529408  334964 kubeadm.go:401] StartCluster: {Name:newest-cni-613901 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-613901 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0111 07:59:11.529486  334964 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0111 07:59:11.529543  334964 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0111 07:59:11.560122  334964 cri.go:96] found id: "5fde0630295af45e89f5405104c2da9f4db5fab8eabd4bbb10a152471a695bdf"
	I0111 07:59:11.560145  334964 cri.go:96] found id: "7c6ff3c61a9404f9369402c76cb8f90e85bc10e8983eb76b28897356f832d40b"
	I0111 07:59:11.560150  334964 cri.go:96] found id: "9ac933f6049e2a09e4fb03396413211d137f8f7533b877795c11868e4bcbda45"
	I0111 07:59:11.560153  334964 cri.go:96] found id: "f2730725cc13c39e4654e49efcdc2e2e8c0f5b6de0213c0834f4b0842b6d5b2f"
	I0111 07:59:11.560156  334964 cri.go:96] found id: ""
	I0111 07:59:11.560190  334964 ssh_runner.go:195] Run: sudo runc list -f json
	W0111 07:59:11.571627  334964 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-11T07:59:11Z" level=error msg="open /run/runc: no such file or directory"
	I0111 07:59:11.571696  334964 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0111 07:59:11.579342  334964 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I0111 07:59:11.579360  334964 kubeadm.go:598] restartPrimaryControlPlane start ...
	I0111 07:59:11.579403  334964 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0111 07:59:11.586452  334964 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0111 07:59:11.586841  334964 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-613901" does not appear in /home/jenkins/minikube-integration/22402-4689/kubeconfig
	I0111 07:59:11.586959  334964 kubeconfig.go:62] /home/jenkins/minikube-integration/22402-4689/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-613901" cluster setting kubeconfig missing "newest-cni-613901" context setting]
	I0111 07:59:11.587271  334964 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-4689/kubeconfig: {Name:mkf02a7b755a7b9c0efbafb355649086e2a25e84 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 07:59:11.588719  334964 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0111 07:59:11.596118  334964 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.103.2
	I0111 07:59:11.596143  334964 kubeadm.go:602] duration metric: took 16.77889ms to restartPrimaryControlPlane
	I0111 07:59:11.596150  334964 kubeadm.go:403] duration metric: took 66.754181ms to StartCluster
	I0111 07:59:11.596165  334964 settings.go:142] acquiring lock: {Name:mk091111a06dcfa678353e028948ce400417c137 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 07:59:11.596213  334964 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22402-4689/kubeconfig
	I0111 07:59:11.596713  334964 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-4689/kubeconfig: {Name:mkf02a7b755a7b9c0efbafb355649086e2a25e84 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 07:59:11.596959  334964 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0111 07:59:11.597040  334964 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0111 07:59:11.597146  334964 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-613901"
	I0111 07:59:11.597173  334964 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-613901"
	W0111 07:59:11.597185  334964 addons.go:248] addon storage-provisioner should already be in state true
	I0111 07:59:11.597213  334964 host.go:66] Checking if "newest-cni-613901" exists ...
	I0111 07:59:11.597234  334964 config.go:182] Loaded profile config "newest-cni-613901": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 07:59:11.597295  334964 addons.go:70] Setting dashboard=true in profile "newest-cni-613901"
	I0111 07:59:11.597312  334964 addons.go:239] Setting addon dashboard=true in "newest-cni-613901"
	W0111 07:59:11.597323  334964 addons.go:248] addon dashboard should already be in state true
	I0111 07:59:11.597345  334964 host.go:66] Checking if "newest-cni-613901" exists ...
	I0111 07:59:11.597647  334964 addons.go:70] Setting default-storageclass=true in profile "newest-cni-613901"
	I0111 07:59:11.597691  334964 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-613901"
	I0111 07:59:11.597722  334964 cli_runner.go:164] Run: docker container inspect newest-cni-613901 --format={{.State.Status}}
	I0111 07:59:11.597830  334964 cli_runner.go:164] Run: docker container inspect newest-cni-613901 --format={{.State.Status}}
	I0111 07:59:11.598056  334964 cli_runner.go:164] Run: docker container inspect newest-cni-613901 --format={{.State.Status}}
	I0111 07:59:11.599937  334964 out.go:179] * Verifying Kubernetes components...
	I0111 07:59:11.601352  334964 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0111 07:59:11.622990  334964 addons.go:239] Setting addon default-storageclass=true in "newest-cni-613901"
	W0111 07:59:11.623068  334964 addons.go:248] addon default-storageclass should already be in state true
	I0111 07:59:11.623101  334964 host.go:66] Checking if "newest-cni-613901" exists ...
	I0111 07:59:11.623557  334964 cli_runner.go:164] Run: docker container inspect newest-cni-613901 --format={{.State.Status}}
	I0111 07:59:11.625239  334964 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0111 07:59:11.625264  334964 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0111 07:59:11.626828  334964 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0111 07:59:11.626846  334964 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0111 07:59:11.626870  334964 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I0111 07:59:11.626904  334964 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-613901
	I0111 07:59:11.631221  334964 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0111 07:59:11.631242  334964 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0111 07:59:11.631299  334964 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-613901
	I0111 07:59:11.650586  334964 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I0111 07:59:11.650611  334964 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0111 07:59:11.650666  334964 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-613901
	I0111 07:59:11.654223  334964 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22402-4689/.minikube/machines/newest-cni-613901/id_rsa Username:docker}
	I0111 07:59:11.669366  334964 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22402-4689/.minikube/machines/newest-cni-613901/id_rsa Username:docker}
	I0111 07:59:11.678106  334964 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22402-4689/.minikube/machines/newest-cni-613901/id_rsa Username:docker}
	I0111 07:59:11.743579  334964 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0111 07:59:11.756230  334964 api_server.go:52] waiting for apiserver process to appear ...
	I0111 07:59:11.756301  334964 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0111 07:59:11.767300  334964 api_server.go:72] duration metric: took 170.30619ms to wait for apiserver process to appear ...
	I0111 07:59:11.767325  334964 api_server.go:88] waiting for apiserver healthz status ...
	I0111 07:59:11.767352  334964 api_server.go:299] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I0111 07:59:11.773847  334964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0111 07:59:11.782067  334964 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0111 07:59:11.782087  334964 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0111 07:59:11.794512  334964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0111 07:59:11.796983  334964 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0111 07:59:11.797002  334964 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0111 07:59:11.809240  334964 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0111 07:59:11.809256  334964 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0111 07:59:11.823739  334964 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0111 07:59:11.823815  334964 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0111 07:59:11.837315  334964 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0111 07:59:11.837336  334964 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0111 07:59:11.850834  334964 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0111 07:59:11.850861  334964 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0111 07:59:11.863327  334964 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0111 07:59:11.863352  334964 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0111 07:59:11.875938  334964 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0111 07:59:11.875962  334964 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0111 07:59:11.888026  334964 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0111 07:59:11.888045  334964 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0111 07:59:11.899839  334964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0111 07:59:12.957262  334964 api_server.go:325] https://192.168.103.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0111 07:59:12.957315  334964 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0111 07:59:12.957335  334964 api_server.go:299] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I0111 07:59:12.969107  334964 api_server.go:325] https://192.168.103.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0111 07:59:12.969140  334964 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0111 07:59:13.267631  334964 api_server.go:299] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I0111 07:59:13.272080  334964 api_server.go:325] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0111 07:59:13.272112  334964 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0111 07:59:13.467623  334964 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.693716677s)
	I0111 07:59:13.467701  334964 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.673159224s)
	I0111 07:59:13.468045  334964 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.568154845s)
	I0111 07:59:13.469846  334964 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-613901 addons enable metrics-server
	
	I0111 07:59:13.478393  334964 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I0111 07:59:13.479661  334964 addons.go:530] duration metric: took 1.882638361s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I0111 07:59:13.767555  334964 api_server.go:299] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I0111 07:59:13.771691  334964 api_server.go:325] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0111 07:59:13.771719  334964 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0111 07:59:14.267989  334964 api_server.go:299] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I0111 07:59:14.273325  334964 api_server.go:325] https://192.168.103.2:8443/healthz returned 200:
	ok
	I0111 07:59:14.274427  334964 api_server.go:141] control plane version: v1.35.0
	I0111 07:59:14.274451  334964 api_server.go:131] duration metric: took 2.507119769s to wait for apiserver health ...
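	The poll above is the apiserver readiness loop as captured in this run: anonymous requests to /healthz return 403 at first, then 500 while the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks are still failing, and finally 200 with a bare "ok" body. A rough curl equivalent (illustrative only, not what minikube itself runs):

	    # keep polling until the body is exactly "ok"; 403/500 responses simply retry
	    until [ "$(curl -sk https://192.168.103.2:8443/healthz)" = "ok" ]; do
	      sleep 0.5
	    done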
	I0111 07:59:14.274459  334964 system_pods.go:43] waiting for kube-system pods to appear ...
	I0111 07:59:14.278316  334964 system_pods.go:59] 8 kube-system pods found
	I0111 07:59:14.278359  334964 system_pods.go:61] "coredns-7d764666f9-tp2lb" [b1be6675-385f-4ec3-98c2-fd56542d3240] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0111 07:59:14.278377  334964 system_pods.go:61] "etcd-newest-cni-613901" [d53cbc3d-297b-427f-91e8-27852d932c59] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0111 07:59:14.278399  334964 system_pods.go:61] "kindnet-pvbpb" [71b2345b-ec04-4adc-8476-37a8fdd70756] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0111 07:59:14.278416  334964 system_pods.go:61] "kube-apiserver-newest-cni-613901" [3e516895-717b-41f3-894c-228f6ec7582e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0111 07:59:14.278428  334964 system_pods.go:61] "kube-controller-manager-newest-cni-613901" [d43f31c7-0834-4aa6-8679-49bd34282b2c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0111 07:59:14.278437  334964 system_pods.go:61] "kube-proxy-8hjlp" [496988d3-27d9-4ba8-b70b-f8a570aaa6aa] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0111 07:59:14.278448  334964 system_pods.go:61] "kube-scheduler-newest-cni-613901" [f2246e87-0b37-4475-aa04-ed6117f7df38] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0111 07:59:14.278457  334964 system_pods.go:61] "storage-provisioner" [f2494dd9-c249-4496-b604-95f98c228cf5] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0111 07:59:14.278468  334964 system_pods.go:74] duration metric: took 4.003054ms to wait for pod list to return data ...
	I0111 07:59:14.278479  334964 default_sa.go:34] waiting for default service account to be created ...
	I0111 07:59:14.281207  334964 default_sa.go:45] found service account: "default"
	I0111 07:59:14.281230  334964 default_sa.go:55] duration metric: took 2.743124ms for default service account to be created ...
	I0111 07:59:14.281240  334964 kubeadm.go:587] duration metric: took 2.684251601s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0111 07:59:14.281256  334964 node_conditions.go:102] verifying NodePressure condition ...
	I0111 07:59:14.283954  334964 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0111 07:59:14.283976  334964 node_conditions.go:123] node cpu capacity is 8
	I0111 07:59:14.283987  334964 node_conditions.go:105] duration metric: took 2.726781ms to run NodePressure ...
	I0111 07:59:14.283996  334964 start.go:242] waiting for startup goroutines ...
	I0111 07:59:14.284002  334964 start.go:247] waiting for cluster config update ...
	I0111 07:59:14.284012  334964 start.go:256] writing updated cluster config ...
	I0111 07:59:14.284234  334964 ssh_runner.go:195] Run: rm -f paused
	I0111 07:59:14.341971  334964 start.go:625] kubectl: 1.35.0, cluster: 1.35.0 (minor skew: 0)
	I0111 07:59:14.343556  334964 out.go:179] * Done! kubectl is now configured to use "newest-cni-613901" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Jan 11 07:59:14 newest-cni-613901 crio[526]: time="2026-01-11T07:59:14.238940693Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 11 07:59:14 newest-cni-613901 crio[526]: time="2026-01-11T07:59:14.242736378Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=9fdebeb4-a270-45b2-886d-7fcc4f4540d8 name=/runtime.v1.RuntimeService/RunPodSandbox
	Jan 11 07:59:14 newest-cni-613901 crio[526]: time="2026-01-11T07:59:14.244304304Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Jan 11 07:59:14 newest-cni-613901 crio[526]: time="2026-01-11T07:59:14.244797001Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=e9507853-7d7e-46b9-b37a-58c4619c4d04 name=/runtime.v1.RuntimeService/RunPodSandbox
	Jan 11 07:59:14 newest-cni-613901 crio[526]: time="2026-01-11T07:59:14.244977483Z" level=info msg="Ran pod sandbox 283bdab36cefb47316aadabcd8c100fd5926e3a36dde7a566f43e9ad0fec8e31 with infra container: kube-system/kindnet-pvbpb/POD" id=9fdebeb4-a270-45b2-886d-7fcc4f4540d8 name=/runtime.v1.RuntimeService/RunPodSandbox
	Jan 11 07:59:14 newest-cni-613901 crio[526]: time="2026-01-11T07:59:14.246142266Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88" id=114279cf-0f45-43e6-9911-c8e6c6cf224b name=/runtime.v1.ImageService/ImageStatus
	Jan 11 07:59:14 newest-cni-613901 crio[526]: time="2026-01-11T07:59:14.246419219Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Jan 11 07:59:14 newest-cni-613901 crio[526]: time="2026-01-11T07:59:14.247192596Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88" id=b81d65ed-8b07-4a4b-9a88-ce65a4adf87e name=/runtime.v1.ImageService/ImageStatus
	Jan 11 07:59:14 newest-cni-613901 crio[526]: time="2026-01-11T07:59:14.247292216Z" level=info msg="Ran pod sandbox 372e3e16163c8979411527f9e4d8ed07bf9acfb6b4e652301d6c4575d48c6ac8 with infra container: kube-system/kube-proxy-8hjlp/POD" id=e9507853-7d7e-46b9-b37a-58c4619c4d04 name=/runtime.v1.RuntimeService/RunPodSandbox
	Jan 11 07:59:14 newest-cni-613901 crio[526]: time="2026-01-11T07:59:14.248254349Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0" id=c25f9c33-22d8-4a71-ac1c-2a67e9c81dfd name=/runtime.v1.ImageService/ImageStatus
	Jan 11 07:59:14 newest-cni-613901 crio[526]: time="2026-01-11T07:59:14.248313525Z" level=info msg="Creating container: kube-system/kindnet-pvbpb/kindnet-cni" id=88c8f760-f33f-4de5-b7bd-615cda507f6e name=/runtime.v1.RuntimeService/CreateContainer
	Jan 11 07:59:14 newest-cni-613901 crio[526]: time="2026-01-11T07:59:14.248396839Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 11 07:59:14 newest-cni-613901 crio[526]: time="2026-01-11T07:59:14.249156891Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0" id=5933a607-5cc4-4706-8d44-f6394b3801c9 name=/runtime.v1.ImageService/ImageStatus
	Jan 11 07:59:14 newest-cni-613901 crio[526]: time="2026-01-11T07:59:14.251351658Z" level=info msg="Creating container: kube-system/kube-proxy-8hjlp/kube-proxy" id=6f459886-b4c7-47ba-8cc8-4ab29137330b name=/runtime.v1.RuntimeService/CreateContainer
	Jan 11 07:59:14 newest-cni-613901 crio[526]: time="2026-01-11T07:59:14.25147779Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 11 07:59:14 newest-cni-613901 crio[526]: time="2026-01-11T07:59:14.2523352Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 11 07:59:14 newest-cni-613901 crio[526]: time="2026-01-11T07:59:14.252941138Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 11 07:59:14 newest-cni-613901 crio[526]: time="2026-01-11T07:59:14.255629843Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 11 07:59:14 newest-cni-613901 crio[526]: time="2026-01-11T07:59:14.256128096Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 11 07:59:14 newest-cni-613901 crio[526]: time="2026-01-11T07:59:14.278374046Z" level=info msg="Created container b50458451c17e544321f77f3c332d3986154320d5fb901d864c1c4290d29bc0f: kube-system/kindnet-pvbpb/kindnet-cni" id=88c8f760-f33f-4de5-b7bd-615cda507f6e name=/runtime.v1.RuntimeService/CreateContainer
	Jan 11 07:59:14 newest-cni-613901 crio[526]: time="2026-01-11T07:59:14.279007019Z" level=info msg="Starting container: b50458451c17e544321f77f3c332d3986154320d5fb901d864c1c4290d29bc0f" id=2e9a4ddb-dfd9-48bc-b430-0e4e45ac8437 name=/runtime.v1.RuntimeService/StartContainer
	Jan 11 07:59:14 newest-cni-613901 crio[526]: time="2026-01-11T07:59:14.281008337Z" level=info msg="Started container" PID=1060 containerID=b50458451c17e544321f77f3c332d3986154320d5fb901d864c1c4290d29bc0f description=kube-system/kindnet-pvbpb/kindnet-cni id=2e9a4ddb-dfd9-48bc-b430-0e4e45ac8437 name=/runtime.v1.RuntimeService/StartContainer sandboxID=283bdab36cefb47316aadabcd8c100fd5926e3a36dde7a566f43e9ad0fec8e31
	Jan 11 07:59:14 newest-cni-613901 crio[526]: time="2026-01-11T07:59:14.28163797Z" level=info msg="Created container b5a7add7780eb91d02b1cb5d471c9be2b995066874ffca0b9fbabd59696a5ef7: kube-system/kube-proxy-8hjlp/kube-proxy" id=6f459886-b4c7-47ba-8cc8-4ab29137330b name=/runtime.v1.RuntimeService/CreateContainer
	Jan 11 07:59:14 newest-cni-613901 crio[526]: time="2026-01-11T07:59:14.282244138Z" level=info msg="Starting container: b5a7add7780eb91d02b1cb5d471c9be2b995066874ffca0b9fbabd59696a5ef7" id=dccfc480-bd53-4493-90a4-e0d758d7d002 name=/runtime.v1.RuntimeService/StartContainer
	Jan 11 07:59:14 newest-cni-613901 crio[526]: time="2026-01-11T07:59:14.285355379Z" level=info msg="Started container" PID=1061 containerID=b5a7add7780eb91d02b1cb5d471c9be2b995066874ffca0b9fbabd59696a5ef7 description=kube-system/kube-proxy-8hjlp/kube-proxy id=dccfc480-bd53-4493-90a4-e0d758d7d002 name=/runtime.v1.RuntimeService/StartContainer sandboxID=372e3e16163c8979411527f9e4d8ed07bf9acfb6b4e652301d6c4575d48c6ac8
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	b5a7add7780eb       32652ff1bbe6b6df16a3dc621fcad4e6e185a2852bffc5847aab79e36c54bfb8   3 seconds ago       Running             kube-proxy                1                   372e3e16163c8       kube-proxy-8hjlp                            kube-system
	b50458451c17e       4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251   3 seconds ago       Running             kindnet-cni               1                   283bdab36cefb       kindnet-pvbpb                               kube-system
	5fde0630295af       550794e3b12ac21ec7fd940bdfb45f7c8dae3c52acd8c70e580b882de20c3dcc   6 seconds ago       Running             kube-scheduler            1                   ad2f70a8b0d01       kube-scheduler-newest-cni-613901            kube-system
	7c6ff3c61a940       2c9a4b058bd7e6ec479c38f9e8a1dac2f5ee5b0a3ebda6dfac968e8720229508   6 seconds ago       Running             kube-controller-manager   1                   7988b3e2b3469       kube-controller-manager-newest-cni-613901   kube-system
	9ac933f6049e2       0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2   6 seconds ago       Running             etcd                      1                   a71b119128305       etcd-newest-cni-613901                      kube-system
	f2730725cc13c       5c6acd67e9cd10eb60a246cd233db251ef62ea97e6572f897e873f0cb648f499   6 seconds ago       Running             kube-apiserver            1                   65907d5c3c234       kube-apiserver-newest-cni-613901            kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-613901
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-613901
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4473aa4ffaa416872fe849e19c0ce3dabca02c04
	                    minikube.k8s.io/name=newest-cni-613901
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2026_01_11T07_58_39_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 11 Jan 2026 07:58:35 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-613901
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 11 Jan 2026 07:59:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 11 Jan 2026 07:59:13 +0000   Sun, 11 Jan 2026 07:58:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 11 Jan 2026 07:59:13 +0000   Sun, 11 Jan 2026 07:58:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 11 Jan 2026 07:59:13 +0000   Sun, 11 Jan 2026 07:58:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sun, 11 Jan 2026 07:59:13 +0000   Sun, 11 Jan 2026 07:58:34 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    newest-cni-613901
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 19a608e996fda0d6a8d0aefd69620b69
	  System UUID:                e49a8585-8f8b-41f4-a337-ff9547046d3e
	  Boot ID:                    bf5ef4fd-1e5e-4242-b522-48b308ed11f1
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.35.0
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-613901                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         39s
	  kube-system                 kindnet-pvbpb                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      34s
	  kube-system                 kube-apiserver-newest-cni-613901             250m (3%)     0 (0%)      0 (0%)           0 (0%)         39s
	  kube-system                 kube-controller-manager-newest-cni-613901    200m (2%)     0 (0%)      0 (0%)           0 (0%)         39s
	  kube-system                 kube-proxy-8hjlp                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-scheduler-newest-cni-613901             100m (1%)     0 (0%)      0 (0%)           0 (0%)         41s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  35s   node-controller  Node newest-cni-613901 event: Registered Node newest-cni-613901 in Controller
	  Normal  RegisteredNode  1s    node-controller  Node newest-cni-613901 event: Registered Node newest-cni-613901 in Controller
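	The Pending coredns and storage-provisioner pods reported earlier (0/1 nodes available, untolerated taint) line up with the node state captured here: the node.kubernetes.io/not-ready:NoSchedule taint is still present because no CNI config had been written to /etc/cni/net.d/ at snapshot time, and kindnet had only been running for a few seconds. One way to check whether the taint has since cleared, assuming the kubectl context created for this profile:

	    kubectl --context newest-cni-613901 get node newest-cni-613901 \
	      -o jsonpath='{.spec.taints[*].key}{"  Ready="}{.status.conditions[?(@.type=="Ready")].status}{"\n"}'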
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff d2 27 60 f0 e8 33 08 06
	[  +7.744805] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ae ab 39 92 a9 51 08 06
	[  +3.804668] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff f6 cc 68 e4 77 08 08 06
	[  +0.000399] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff d2 27 60 f0 e8 33 08 06
	[ +13.685163] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff ce 63 58 3f ae 06 08 06
	[  +0.000359] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ae ab 39 92 a9 51 08 06
	[ +15.294069] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 62 fd 79 5e 2e 8a 08 06
	[  +0.000335] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 92 a7 42 30 c5 33 08 06
	[  +0.001094] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 76 49 4e 78 31 d8 08 06
	[Jan11 07:56] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 1e 72 0a 69 e2 8c 08 06
	[  +0.129588] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 46 31 8d 74 d3 50 08 06
	[ +23.675508] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 72 b4 a7 bf 59 ba 08 06
	[  +0.000409] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 46 31 8d 74 d3 50 08 06
	
	
	==> etcd [9ac933f6049e2a09e4fb03396413211d137f8f7533b877795c11868e4bcbda45] <==
	{"level":"info","ts":"2026-01-11T07:59:11.521410Z","caller":"embed/etcd.go:292","msg":"now serving peer/client/metrics","local-member-id":"f23060b075c4c089","initial-advertise-peer-urls":["https://192.168.103.2:2380"],"listen-peer-urls":["https://192.168.103.2:2380"],"advertise-client-urls":["https://192.168.103.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.103.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2026-01-11T07:59:11.521458Z","caller":"embed/etcd.go:890","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2026-01-11T07:59:11.521543Z","caller":"embed/etcd.go:640","msg":"serving peer traffic","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2026-01-11T07:59:11.521557Z","caller":"embed/etcd.go:611","msg":"cmux::serve","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2026-01-11T07:59:11.523298Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"f23060b075c4c089 switched to configuration voters=(17451554867067011209)"}
	{"level":"info","ts":"2026-01-11T07:59:11.523478Z","caller":"membership/cluster.go:433","msg":"ignore already added member","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","added-peer-id":"f23060b075c4c089","added-peer-peer-urls":["https://192.168.103.2:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2026-01-11T07:59:11.523606Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","from":"3.6","to":"3.6"}
	{"level":"info","ts":"2026-01-11T07:59:12.112761Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"f23060b075c4c089 is starting a new election at term 2"}
	{"level":"info","ts":"2026-01-11T07:59:12.112850Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"f23060b075c4c089 became pre-candidate at term 2"}
	{"level":"info","ts":"2026-01-11T07:59:12.112901Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"f23060b075c4c089 received MsgPreVoteResp from f23060b075c4c089 at term 2"}
	{"level":"info","ts":"2026-01-11T07:59:12.112914Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"f23060b075c4c089 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2026-01-11T07:59:12.112932Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"f23060b075c4c089 became candidate at term 3"}
	{"level":"info","ts":"2026-01-11T07:59:12.113737Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"f23060b075c4c089 received MsgVoteResp from f23060b075c4c089 at term 3"}
	{"level":"info","ts":"2026-01-11T07:59:12.113774Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"f23060b075c4c089 has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2026-01-11T07:59:12.113792Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"f23060b075c4c089 became leader at term 3"}
	{"level":"info","ts":"2026-01-11T07:59:12.113819Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: f23060b075c4c089 elected leader f23060b075c4c089 at term 3"}
	{"level":"info","ts":"2026-01-11T07:59:12.114533Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"f23060b075c4c089","local-member-attributes":"{Name:newest-cni-613901 ClientURLs:[https://192.168.103.2:2379]}","cluster-id":"3336683c081d149d","publish-timeout":"7s"}
	{"level":"info","ts":"2026-01-11T07:59:12.114554Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-11T07:59:12.114593Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-11T07:59:12.114856Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2026-01-11T07:59:12.114894Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2026-01-11T07:59:12.115955Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-11T07:59:12.115953Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-11T07:59:12.119496Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2026-01-11T07:59:12.119523Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.103.2:2379"}
	
	
	==> kernel <==
	 07:59:17 up 41 min,  0 user,  load average: 2.78, 3.31, 2.52
	Linux newest-cni-613901 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [b50458451c17e544321f77f3c332d3986154320d5fb901d864c1c4290d29bc0f] <==
	I0111 07:59:14.488750       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0111 07:59:14.541710       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I0111 07:59:14.541858       1 main.go:148] setting mtu 1500 for CNI 
	I0111 07:59:14.541882       1 main.go:178] kindnetd IP family: "ipv4"
	I0111 07:59:14.541893       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2026-01-11T07:59:14Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0111 07:59:14.743286       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0111 07:59:14.744090       1 controller.go:381] "Waiting for informer caches to sync"
	I0111 07:59:14.744126       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0111 07:59:14.744429       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I0111 07:59:15.047417       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0111 07:59:15.047533       1 metrics.go:72] Registering metrics
	I0111 07:59:15.047653       1 controller.go:711] "Syncing nftables rules"
	
	
	==> kube-apiserver [f2730725cc13c39e4654e49efcdc2e2e8c0f5b6de0213c0834f4b0842b6d5b2f] <==
	I0111 07:59:13.039549       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0111 07:59:13.039670       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0111 07:59:13.039834       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I0111 07:59:13.039895       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I0111 07:59:13.040090       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I0111 07:59:13.040151       1 aggregator.go:187] initial CRD sync complete...
	I0111 07:59:13.040182       1 autoregister_controller.go:144] Starting autoregister controller
	I0111 07:59:13.040206       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0111 07:59:13.040239       1 cache.go:39] Caches are synced for autoregister controller
	I0111 07:59:13.040667       1 shared_informer.go:377] "Caches are synced"
	I0111 07:59:13.040685       1 policy_source.go:248] refreshing policies
	I0111 07:59:13.044975       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I0111 07:59:13.055375       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0111 07:59:13.061057       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I0111 07:59:13.281268       1 controller.go:667] quota admission added evaluator for: namespaces
	I0111 07:59:13.308237       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I0111 07:59:13.324789       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0111 07:59:13.331221       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0111 07:59:13.338407       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I0111 07:59:13.366403       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.96.84.88"}
	I0111 07:59:13.376404       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.100.14.118"}
	I0111 07:59:13.942012       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I0111 07:59:16.655987       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0111 07:59:16.753618       1 controller.go:667] quota admission added evaluator for: endpoints
	I0111 07:59:16.803784       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [7c6ff3c61a9404f9369402c76cb8f90e85bc10e8983eb76b28897356f832d40b] <==
	I0111 07:59:16.206180       1 shared_informer.go:377] "Caches are synced"
	I0111 07:59:16.206191       1 shared_informer.go:377] "Caches are synced"
	I0111 07:59:16.206180       1 shared_informer.go:377] "Caches are synced"
	I0111 07:59:16.206320       1 shared_informer.go:377] "Caches are synced"
	I0111 07:59:16.206331       1 shared_informer.go:377] "Caches are synced"
	I0111 07:59:16.206340       1 shared_informer.go:377] "Caches are synced"
	I0111 07:59:16.206344       1 shared_informer.go:377] "Caches are synced"
	I0111 07:59:16.206361       1 shared_informer.go:377] "Caches are synced"
	I0111 07:59:16.206360       1 shared_informer.go:377] "Caches are synced"
	I0111 07:59:16.207233       1 shared_informer.go:377] "Caches are synced"
	I0111 07:59:16.207412       1 shared_informer.go:377] "Caches are synced"
	I0111 07:59:16.207434       1 shared_informer.go:377] "Caches are synced"
	I0111 07:59:16.207444       1 shared_informer.go:377] "Caches are synced"
	I0111 07:59:16.207595       1 shared_informer.go:377] "Caches are synced"
	I0111 07:59:16.207605       1 shared_informer.go:377] "Caches are synced"
	I0111 07:59:16.207609       1 shared_informer.go:377] "Caches are synced"
	I0111 07:59:16.207625       1 shared_informer.go:377] "Caches are synced"
	I0111 07:59:16.207607       1 shared_informer.go:377] "Caches are synced"
	I0111 07:59:16.207773       1 shared_informer.go:377] "Caches are synced"
	I0111 07:59:16.211398       1 shared_informer.go:377] "Caches are synced"
	I0111 07:59:16.216434       1 shared_informer.go:370] "Waiting for caches to sync"
	I0111 07:59:16.307647       1 shared_informer.go:377] "Caches are synced"
	I0111 07:59:16.307665       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I0111 07:59:16.307670       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I0111 07:59:16.316829       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [b5a7add7780eb91d02b1cb5d471c9be2b995066874ffca0b9fbabd59696a5ef7] <==
	I0111 07:59:14.325229       1 server_linux.go:53] "Using iptables proxy"
	I0111 07:59:14.395861       1 shared_informer.go:370] "Waiting for caches to sync"
	I0111 07:59:14.496360       1 shared_informer.go:377] "Caches are synced"
	I0111 07:59:14.496391       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E0111 07:59:14.496456       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0111 07:59:14.515399       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0111 07:59:14.515462       1 server_linux.go:136] "Using iptables Proxier"
	I0111 07:59:14.520241       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0111 07:59:14.520558       1 server.go:529] "Version info" version="v1.35.0"
	I0111 07:59:14.520587       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0111 07:59:14.521656       1 config.go:200] "Starting service config controller"
	I0111 07:59:14.521674       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0111 07:59:14.521676       1 config.go:403] "Starting serviceCIDR config controller"
	I0111 07:59:14.521688       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0111 07:59:14.521661       1 config.go:106] "Starting endpoint slice config controller"
	I0111 07:59:14.521700       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0111 07:59:14.521715       1 config.go:309] "Starting node config controller"
	I0111 07:59:14.521726       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0111 07:59:14.622119       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0111 07:59:14.622152       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0111 07:59:14.622166       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0111 07:59:14.622175       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [5fde0630295af45e89f5405104c2da9f4db5fab8eabd4bbb10a152471a695bdf] <==
	I0111 07:59:11.654676       1 serving.go:386] Generated self-signed cert in-memory
	W0111 07:59:12.955888       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0111 07:59:12.956008       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0111 07:59:12.956059       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0111 07:59:12.956090       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0111 07:59:12.993272       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0"
	I0111 07:59:12.993410       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0111 07:59:12.998391       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0111 07:59:12.998430       1 shared_informer.go:370] "Waiting for caches to sync"
	I0111 07:59:12.998543       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0111 07:59:12.998747       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0111 07:59:13.098689       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Jan 11 07:59:13 newest-cni-613901 kubelet[678]: I0111 07:59:13.056521     678 kubelet_node_status.go:123] "Node was previously registered" node="newest-cni-613901"
	Jan 11 07:59:13 newest-cni-613901 kubelet[678]: I0111 07:59:13.056628     678 kubelet_node_status.go:77] "Successfully registered node" node="newest-cni-613901"
	Jan 11 07:59:13 newest-cni-613901 kubelet[678]: I0111 07:59:13.056666     678 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Jan 11 07:59:13 newest-cni-613901 kubelet[678]: I0111 07:59:13.057621     678 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Jan 11 07:59:13 newest-cni-613901 kubelet[678]: E0111 07:59:13.148694     678 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-613901\" already exists" pod="kube-system/kube-scheduler-newest-cni-613901"
	Jan 11 07:59:13 newest-cni-613901 kubelet[678]: I0111 07:59:13.148729     678 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-613901"
	Jan 11 07:59:13 newest-cni-613901 kubelet[678]: E0111 07:59:13.154269     678 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-613901\" already exists" pod="kube-system/etcd-newest-cni-613901"
	Jan 11 07:59:13 newest-cni-613901 kubelet[678]: I0111 07:59:13.154300     678 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-613901"
	Jan 11 07:59:13 newest-cni-613901 kubelet[678]: E0111 07:59:13.159892     678 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-613901\" already exists" pod="kube-system/kube-apiserver-newest-cni-613901"
	Jan 11 07:59:13 newest-cni-613901 kubelet[678]: I0111 07:59:13.159922     678 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-613901"
	Jan 11 07:59:13 newest-cni-613901 kubelet[678]: E0111 07:59:13.166025     678 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-613901\" already exists" pod="kube-system/kube-controller-manager-newest-cni-613901"
	Jan 11 07:59:13 newest-cni-613901 kubelet[678]: I0111 07:59:13.929242     678 apiserver.go:52] "Watching apiserver"
	Jan 11 07:59:13 newest-cni-613901 kubelet[678]: E0111 07:59:13.934279     678 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-newest-cni-613901" containerName="kube-controller-manager"
	Jan 11 07:59:13 newest-cni-613901 kubelet[678]: I0111 07:59:13.934629     678 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Jan 11 07:59:13 newest-cni-613901 kubelet[678]: I0111 07:59:13.963943     678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/71b2345b-ec04-4adc-8476-37a8fdd70756-xtables-lock\") pod \"kindnet-pvbpb\" (UID: \"71b2345b-ec04-4adc-8476-37a8fdd70756\") " pod="kube-system/kindnet-pvbpb"
	Jan 11 07:59:13 newest-cni-613901 kubelet[678]: I0111 07:59:13.964028     678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/71b2345b-ec04-4adc-8476-37a8fdd70756-cni-cfg\") pod \"kindnet-pvbpb\" (UID: \"71b2345b-ec04-4adc-8476-37a8fdd70756\") " pod="kube-system/kindnet-pvbpb"
	Jan 11 07:59:13 newest-cni-613901 kubelet[678]: I0111 07:59:13.964054     678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/71b2345b-ec04-4adc-8476-37a8fdd70756-lib-modules\") pod \"kindnet-pvbpb\" (UID: \"71b2345b-ec04-4adc-8476-37a8fdd70756\") " pod="kube-system/kindnet-pvbpb"
	Jan 11 07:59:13 newest-cni-613901 kubelet[678]: I0111 07:59:13.964091     678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/496988d3-27d9-4ba8-b70b-f8a570aaa6aa-xtables-lock\") pod \"kube-proxy-8hjlp\" (UID: \"496988d3-27d9-4ba8-b70b-f8a570aaa6aa\") " pod="kube-system/kube-proxy-8hjlp"
	Jan 11 07:59:13 newest-cni-613901 kubelet[678]: I0111 07:59:13.964113     678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/496988d3-27d9-4ba8-b70b-f8a570aaa6aa-lib-modules\") pod \"kube-proxy-8hjlp\" (UID: \"496988d3-27d9-4ba8-b70b-f8a570aaa6aa\") " pod="kube-system/kube-proxy-8hjlp"
	Jan 11 07:59:13 newest-cni-613901 kubelet[678]: E0111 07:59:13.984814     678 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-613901" containerName="etcd"
	Jan 11 07:59:13 newest-cni-613901 kubelet[678]: E0111 07:59:13.984981     678 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-613901" containerName="kube-scheduler"
	Jan 11 07:59:13 newest-cni-613901 kubelet[678]: E0111 07:59:13.985111     678 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-613901" containerName="kube-apiserver"
	Jan 11 07:59:15 newest-cni-613901 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Jan 11 07:59:15 newest-cni-613901 systemd[1]: kubelet.service: Deactivated successfully.
	Jan 11 07:59:15 newest-cni-613901 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-613901 -n newest-cni-613901
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-613901 -n newest-cni-613901: exit status 2 (330.37016ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context newest-cni-613901 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: coredns-7d764666f9-tp2lb storage-provisioner dashboard-metrics-scraper-867fb5f87b-fw4r6 kubernetes-dashboard-b84665fb8-zzvtx
helpers_test.go:283: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context newest-cni-613901 describe pod coredns-7d764666f9-tp2lb storage-provisioner dashboard-metrics-scraper-867fb5f87b-fw4r6 kubernetes-dashboard-b84665fb8-zzvtx
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context newest-cni-613901 describe pod coredns-7d764666f9-tp2lb storage-provisioner dashboard-metrics-scraper-867fb5f87b-fw4r6 kubernetes-dashboard-b84665fb8-zzvtx: exit status 1 (58.262707ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-7d764666f9-tp2lb" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-867fb5f87b-fw4r6" not found
	Error from server (NotFound): pods "kubernetes-dashboard-b84665fb8-zzvtx" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context newest-cni-613901 describe pod coredns-7d764666f9-tp2lb storage-provisioner dashboard-metrics-scraper-867fb5f87b-fw4r6 kubernetes-dashboard-b84665fb8-zzvtx: exit status 1
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-613901
helpers_test.go:244: (dbg) docker inspect newest-cni-613901:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "11841d4ece70946c0082b1716d9cbfed372d03bdc672f9a541df8957a6873d40",
	        "Created": "2026-01-11T07:58:23.124097279Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 335265,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2026-01-11T07:59:04.679127461Z",
	            "FinishedAt": "2026-01-11T07:59:03.786681409Z"
	        },
	        "Image": "sha256:9d81ad6f098dcf669510a21e6f98da4d3301079c139c22f3d7bcee050830f45e",
	        "ResolvConfPath": "/var/lib/docker/containers/11841d4ece70946c0082b1716d9cbfed372d03bdc672f9a541df8957a6873d40/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/11841d4ece70946c0082b1716d9cbfed372d03bdc672f9a541df8957a6873d40/hostname",
	        "HostsPath": "/var/lib/docker/containers/11841d4ece70946c0082b1716d9cbfed372d03bdc672f9a541df8957a6873d40/hosts",
	        "LogPath": "/var/lib/docker/containers/11841d4ece70946c0082b1716d9cbfed372d03bdc672f9a541df8957a6873d40/11841d4ece70946c0082b1716d9cbfed372d03bdc672f9a541df8957a6873d40-json.log",
	        "Name": "/newest-cni-613901",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-613901:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-613901",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "11841d4ece70946c0082b1716d9cbfed372d03bdc672f9a541df8957a6873d40",
	                "LowerDir": "/var/lib/docker/overlay2/24467b78c5d7726ce7b085d0dd3f727668774fcb8480ab182ba19ae8a2f0c8f6-init/diff:/var/lib/docker/overlay2/fbed3e2386e680328722e9ed68250a4aae1dd3b50a64848ee2ebb685fb1cc002/diff",
	                "MergedDir": "/var/lib/docker/overlay2/24467b78c5d7726ce7b085d0dd3f727668774fcb8480ab182ba19ae8a2f0c8f6/merged",
	                "UpperDir": "/var/lib/docker/overlay2/24467b78c5d7726ce7b085d0dd3f727668774fcb8480ab182ba19ae8a2f0c8f6/diff",
	                "WorkDir": "/var/lib/docker/overlay2/24467b78c5d7726ce7b085d0dd3f727668774fcb8480ab182ba19ae8a2f0c8f6/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-613901",
	                "Source": "/var/lib/docker/volumes/newest-cni-613901/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-613901",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-613901",
	                "name.minikube.sigs.k8s.io": "newest-cni-613901",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "b6a08a0854c8f9a2ff1628f088c3033e0c724ffe2ba7a3a6dc7235ca725e1729",
	            "SandboxKey": "/var/run/docker/netns/b6a08a0854c8",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33133"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33134"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33137"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33135"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33136"
	                    }
	                ]
	            },
	            "Networks": {
	                "newest-cni-613901": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "cf117b97d00c25a416ee9de4a3b71e11b7933ae5b129ce6cd466ea7394700035",
	                    "EndpointID": "dbb3e5335fe163416c30daa722331c5774e618549aa241febedcdf425f85ae76",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "12:ce:24:95:73:cf",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-613901",
	                        "11841d4ece70"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-613901 -n newest-cni-613901
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-613901 -n newest-cni-613901: exit status 2 (323.119041ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-613901 logs -n 25
helpers_test.go:261: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────
────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │              PROFILE              │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────
────────────┤
	│ pause   │ -p no-preload-875594 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-875594                 │ jenkins │ v1.37.0 │ 11 Jan 26 07:58 UTC │                     │
	│ image   │ embed-certs-285966 image list --format=json                                                                                                                                                                                                   │ embed-certs-285966                │ jenkins │ v1.37.0 │ 11 Jan 26 07:58 UTC │ 11 Jan 26 07:58 UTC │
	│ pause   │ -p embed-certs-285966 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-285966                │ jenkins │ v1.37.0 │ 11 Jan 26 07:58 UTC │                     │
	│ delete  │ -p old-k8s-version-086314                                                                                                                                                                                                                     │ old-k8s-version-086314            │ jenkins │ v1.37.0 │ 11 Jan 26 07:58 UTC │ 11 Jan 26 07:58 UTC │
	│ start   │ -p newest-cni-613901 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ newest-cni-613901                 │ jenkins │ v1.37.0 │ 11 Jan 26 07:58 UTC │ 11 Jan 26 07:58 UTC │
	│ delete  │ -p no-preload-875594                                                                                                                                                                                                                          │ no-preload-875594                 │ jenkins │ v1.37.0 │ 11 Jan 26 07:58 UTC │ 11 Jan 26 07:58 UTC │
	│ delete  │ -p embed-certs-285966                                                                                                                                                                                                                         │ embed-certs-285966                │ jenkins │ v1.37.0 │ 11 Jan 26 07:58 UTC │ 11 Jan 26 07:58 UTC │
	│ delete  │ -p no-preload-875594                                                                                                                                                                                                                          │ no-preload-875594                 │ jenkins │ v1.37.0 │ 11 Jan 26 07:58 UTC │ 11 Jan 26 07:58 UTC │
	│ start   │ -p test-preload-dl-gcs-476138 --download-only --kubernetes-version v1.34.0-rc.1 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=crio                                                                        │ test-preload-dl-gcs-476138        │ jenkins │ v1.37.0 │ 11 Jan 26 07:58 UTC │                     │
	│ delete  │ -p embed-certs-285966                                                                                                                                                                                                                         │ embed-certs-285966                │ jenkins │ v1.37.0 │ 11 Jan 26 07:58 UTC │ 11 Jan 26 07:58 UTC │
	│ delete  │ -p test-preload-dl-gcs-476138                                                                                                                                                                                                                 │ test-preload-dl-gcs-476138        │ jenkins │ v1.37.0 │ 11 Jan 26 07:58 UTC │ 11 Jan 26 07:58 UTC │
	│ start   │ -p test-preload-dl-github-044639 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=github --alsologtostderr --v=1 --driver=docker  --container-runtime=crio                                                                  │ test-preload-dl-github-044639     │ jenkins │ v1.37.0 │ 11 Jan 26 07:58 UTC │                     │
	│ delete  │ -p test-preload-dl-github-044639                                                                                                                                                                                                              │ test-preload-dl-github-044639     │ jenkins │ v1.37.0 │ 11 Jan 26 07:58 UTC │ 11 Jan 26 07:58 UTC │
	│ start   │ -p test-preload-dl-gcs-cached-787958 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=crio                                                                 │ test-preload-dl-gcs-cached-787958 │ jenkins │ v1.37.0 │ 11 Jan 26 07:58 UTC │                     │
	│ delete  │ -p test-preload-dl-gcs-cached-787958                                                                                                                                                                                                          │ test-preload-dl-gcs-cached-787958 │ jenkins │ v1.37.0 │ 11 Jan 26 07:58 UTC │ 11 Jan 26 07:58 UTC │
	│ addons  │ enable metrics-server -p newest-cni-613901 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-613901                 │ jenkins │ v1.37.0 │ 11 Jan 26 07:58 UTC │                     │
	│ stop    │ -p newest-cni-613901 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-613901                 │ jenkins │ v1.37.0 │ 11 Jan 26 07:58 UTC │ 11 Jan 26 07:59 UTC │
	│ image   │ default-k8s-diff-port-585688 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-585688      │ jenkins │ v1.37.0 │ 11 Jan 26 07:59 UTC │ 11 Jan 26 07:59 UTC │
	│ pause   │ -p default-k8s-diff-port-585688 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-585688      │ jenkins │ v1.37.0 │ 11 Jan 26 07:59 UTC │                     │
	│ addons  │ enable dashboard -p newest-cni-613901 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-613901                 │ jenkins │ v1.37.0 │ 11 Jan 26 07:59 UTC │ 11 Jan 26 07:59 UTC │
	│ start   │ -p newest-cni-613901 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ newest-cni-613901                 │ jenkins │ v1.37.0 │ 11 Jan 26 07:59 UTC │ 11 Jan 26 07:59 UTC │
	│ delete  │ -p default-k8s-diff-port-585688                                                                                                                                                                                                               │ default-k8s-diff-port-585688      │ jenkins │ v1.37.0 │ 11 Jan 26 07:59 UTC │ 11 Jan 26 07:59 UTC │
	│ delete  │ -p default-k8s-diff-port-585688                                                                                                                                                                                                               │ default-k8s-diff-port-585688      │ jenkins │ v1.37.0 │ 11 Jan 26 07:59 UTC │ 11 Jan 26 07:59 UTC │
	│ image   │ newest-cni-613901 image list --format=json                                                                                                                                                                                                    │ newest-cni-613901                 │ jenkins │ v1.37.0 │ 11 Jan 26 07:59 UTC │ 11 Jan 26 07:59 UTC │
	│ pause   │ -p newest-cni-613901 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-613901                 │ jenkins │ v1.37.0 │ 11 Jan 26 07:59 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────
────────────┘
	
	
	==> Last Start <==
	Log file created at: 2026/01/11 07:59:04
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0111 07:59:04.424321  334964 out.go:360] Setting OutFile to fd 1 ...
	I0111 07:59:04.424599  334964 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 07:59:04.424609  334964 out.go:374] Setting ErrFile to fd 2...
	I0111 07:59:04.424616  334964 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 07:59:04.424850  334964 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22402-4689/.minikube/bin
	I0111 07:59:04.425313  334964 out.go:368] Setting JSON to false
	I0111 07:59:04.426460  334964 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2492,"bootTime":1768115852,"procs":407,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0111 07:59:04.426513  334964 start.go:143] virtualization: kvm guest
	I0111 07:59:04.428404  334964 out.go:179] * [newest-cni-613901] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0111 07:59:04.429688  334964 out.go:179]   - MINIKUBE_LOCATION=22402
	I0111 07:59:04.429693  334964 notify.go:221] Checking for updates...
	I0111 07:59:04.432070  334964 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0111 07:59:04.433340  334964 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22402-4689/kubeconfig
	I0111 07:59:04.434705  334964 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22402-4689/.minikube
	I0111 07:59:04.435864  334964 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0111 07:59:04.437099  334964 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0111 07:59:04.438644  334964 config.go:182] Loaded profile config "newest-cni-613901": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 07:59:04.439184  334964 driver.go:422] Setting default libvirt URI to qemu:///system
	I0111 07:59:04.469255  334964 docker.go:124] docker version: linux-29.1.4:Docker Engine - Community
	I0111 07:59:04.469400  334964 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0111 07:59:04.532551  334964 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:54 SystemTime:2026-01-11 07:59:04.522566296 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0111 07:59:04.532660  334964 docker.go:319] overlay module found
	I0111 07:59:04.534785  334964 out.go:179] * Using the docker driver based on existing profile
	I0111 07:59:04.536472  334964 start.go:309] selected driver: docker
	I0111 07:59:04.536485  334964 start.go:928] validating driver "docker" against &{Name:newest-cni-613901 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-613901 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker Mo
untIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0111 07:59:04.536564  334964 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0111 07:59:04.537160  334964 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0111 07:59:04.596657  334964 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:54 SystemTime:2026-01-11 07:59:04.586303209 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0111 07:59:04.597010  334964 start_flags.go:1038] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0111 07:59:04.597044  334964 cni.go:84] Creating CNI manager for ""
	I0111 07:59:04.597131  334964 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0111 07:59:04.597199  334964 start.go:353] cluster config:
	{Name:newest-cni-613901 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-613901 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docke
r BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0111 07:59:04.599224  334964 out.go:179] * Starting "newest-cni-613901" primary control-plane node in "newest-cni-613901" cluster
	I0111 07:59:04.601496  334964 cache.go:134] Beginning downloading kic base image for docker with crio
	I0111 07:59:04.604261  334964 out.go:179] * Pulling base image v0.0.48-1768032998-22402 ...
	I0111 07:59:04.605369  334964 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0111 07:59:04.605402  334964 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22402-4689/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4
	I0111 07:59:04.605417  334964 cache.go:65] Caching tarball of preloaded images
	I0111 07:59:04.605467  334964 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 in local docker daemon
	I0111 07:59:04.605487  334964 preload.go:251] Found /home/jenkins/minikube-integration/22402-4689/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0111 07:59:04.605494  334964 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I0111 07:59:04.605578  334964 profile.go:143] Saving config to /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/newest-cni-613901/config.json ...
	I0111 07:59:04.627520  334964 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 in local docker daemon, skipping pull
	I0111 07:59:04.627538  334964 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 exists in daemon, skipping load
	I0111 07:59:04.627553  334964 cache.go:243] Successfully downloaded all kic artifacts
	I0111 07:59:04.627579  334964 start.go:360] acquireMachinesLock for newest-cni-613901: {Name:mk432dc96cb4574bc050bdfd86d1212df9b7472d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0111 07:59:04.627639  334964 start.go:364] duration metric: took 35.746µs to acquireMachinesLock for "newest-cni-613901"
	I0111 07:59:04.627656  334964 start.go:96] Skipping create...Using existing machine configuration
	I0111 07:59:04.627663  334964 fix.go:54] fixHost starting: 
	I0111 07:59:04.627886  334964 cli_runner.go:164] Run: docker container inspect newest-cni-613901 --format={{.State.Status}}
	I0111 07:59:04.649205  334964 fix.go:112] recreateIfNeeded on newest-cni-613901: state=Stopped err=<nil>
	W0111 07:59:04.649242  334964 fix.go:138] unexpected machine state, will restart: <nil>
	I0111 07:59:04.651030  334964 out.go:252] * Restarting existing docker container for "newest-cni-613901" ...
	I0111 07:59:04.651086  334964 cli_runner.go:164] Run: docker start newest-cni-613901
	I0111 07:59:04.906273  334964 cli_runner.go:164] Run: docker container inspect newest-cni-613901 --format={{.State.Status}}
	I0111 07:59:04.926949  334964 kic.go:430] container "newest-cni-613901" state is running.
	I0111 07:59:04.927420  334964 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-613901
	I0111 07:59:04.949540  334964 profile.go:143] Saving config to /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/newest-cni-613901/config.json ...
	I0111 07:59:04.949755  334964 machine.go:94] provisionDockerMachine start ...
	I0111 07:59:04.949827  334964 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-613901
	I0111 07:59:04.970305  334964 main.go:144] libmachine: Using SSH client type: native
	I0111 07:59:04.970623  334964 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x9087e0] 0x90b480 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I0111 07:59:04.970642  334964 main.go:144] libmachine: About to run SSH command:
	hostname
	I0111 07:59:04.971418  334964 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:50664->127.0.0.1:33133: read: connection reset by peer
	I0111 07:59:08.119568  334964 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-613901
	
	I0111 07:59:08.119592  334964 ubuntu.go:182] provisioning hostname "newest-cni-613901"
	I0111 07:59:08.119661  334964 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-613901
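(The port lookup on the line above recurs throughout this log: the Go template resolves the host port that Docker mapped to the container's SSH port 22/tcp. A minimal illustrative shell invocation, not part of the test run, would be:)
	# Print the host port mapped to 22/tcp for the newest-cni-613901 container
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' newest-cni-613901
	# In this run the result is 33133, which the SSH client below dials on 127.0.0.1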
	I0111 07:59:08.138990  334964 main.go:144] libmachine: Using SSH client type: native
	I0111 07:59:08.139243  334964 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x9087e0] 0x90b480 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I0111 07:59:08.139257  334964 main.go:144] libmachine: About to run SSH command:
	sudo hostname newest-cni-613901 && echo "newest-cni-613901" | sudo tee /etc/hostname
	I0111 07:59:08.302461  334964 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-613901
	
	I0111 07:59:08.302525  334964 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-613901
	I0111 07:59:08.321551  334964 main.go:144] libmachine: Using SSH client type: native
	I0111 07:59:08.321840  334964 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x9087e0] 0x90b480 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I0111 07:59:08.321865  334964 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-613901' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-613901/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-613901' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0111 07:59:08.465677  334964 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I0111 07:59:08.465711  334964 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22402-4689/.minikube CaCertPath:/home/jenkins/minikube-integration/22402-4689/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22402-4689/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22402-4689/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22402-4689/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22402-4689/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22402-4689/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22402-4689/.minikube}
	I0111 07:59:08.465736  334964 ubuntu.go:190] setting up certificates
	I0111 07:59:08.465749  334964 provision.go:84] configureAuth start
	I0111 07:59:08.465827  334964 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-613901
	I0111 07:59:08.485658  334964 provision.go:143] copyHostCerts
	I0111 07:59:08.485723  334964 exec_runner.go:144] found /home/jenkins/minikube-integration/22402-4689/.minikube/ca.pem, removing ...
	I0111 07:59:08.485738  334964 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22402-4689/.minikube/ca.pem
	I0111 07:59:08.485834  334964 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22402-4689/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22402-4689/.minikube/ca.pem (1082 bytes)
	I0111 07:59:08.485990  334964 exec_runner.go:144] found /home/jenkins/minikube-integration/22402-4689/.minikube/cert.pem, removing ...
	I0111 07:59:08.486003  334964 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22402-4689/.minikube/cert.pem
	I0111 07:59:08.486047  334964 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22402-4689/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22402-4689/.minikube/cert.pem (1123 bytes)
	I0111 07:59:08.486145  334964 exec_runner.go:144] found /home/jenkins/minikube-integration/22402-4689/.minikube/key.pem, removing ...
	I0111 07:59:08.486156  334964 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22402-4689/.minikube/key.pem
	I0111 07:59:08.486197  334964 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22402-4689/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22402-4689/.minikube/key.pem (1675 bytes)
	I0111 07:59:08.486288  334964 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22402-4689/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22402-4689/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22402-4689/.minikube/certs/ca-key.pem org=jenkins.newest-cni-613901 san=[127.0.0.1 192.168.103.2 localhost minikube newest-cni-613901]
	I0111 07:59:08.563405  334964 provision.go:177] copyRemoteCerts
	I0111 07:59:08.563468  334964 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0111 07:59:08.563515  334964 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-613901
	I0111 07:59:08.581934  334964 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22402-4689/.minikube/machines/newest-cni-613901/id_rsa Username:docker}
	I0111 07:59:08.688570  334964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-4689/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0111 07:59:08.710080  334964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-4689/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0111 07:59:08.729446  334964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-4689/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0111 07:59:08.747200  334964 provision.go:87] duration metric: took 281.43269ms to configureAuth
	I0111 07:59:08.747232  334964 ubuntu.go:206] setting minikube options for container-runtime
	I0111 07:59:08.747407  334964 config.go:182] Loaded profile config "newest-cni-613901": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 07:59:08.747525  334964 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-613901
	I0111 07:59:08.765317  334964 main.go:144] libmachine: Using SSH client type: native
	I0111 07:59:08.765552  334964 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x9087e0] 0x90b480 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I0111 07:59:08.765574  334964 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0111 07:59:09.061618  334964 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0111 07:59:09.061646  334964 machine.go:97] duration metric: took 4.111878412s to provisionDockerMachine
	I0111 07:59:09.061661  334964 start.go:293] postStartSetup for "newest-cni-613901" (driver="docker")
	I0111 07:59:09.061672  334964 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0111 07:59:09.061723  334964 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0111 07:59:09.061770  334964 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-613901
	I0111 07:59:09.082123  334964 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22402-4689/.minikube/machines/newest-cni-613901/id_rsa Username:docker}
	I0111 07:59:09.183323  334964 ssh_runner.go:195] Run: cat /etc/os-release
	I0111 07:59:09.186782  334964 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0111 07:59:09.186815  334964 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I0111 07:59:09.186825  334964 filesync.go:126] Scanning /home/jenkins/minikube-integration/22402-4689/.minikube/addons for local assets ...
	I0111 07:59:09.186867  334964 filesync.go:126] Scanning /home/jenkins/minikube-integration/22402-4689/.minikube/files for local assets ...
	I0111 07:59:09.186952  334964 filesync.go:149] local asset: /home/jenkins/minikube-integration/22402-4689/.minikube/files/etc/ssl/certs/82382.pem -> 82382.pem in /etc/ssl/certs
	I0111 07:59:09.187045  334964 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0111 07:59:09.194295  334964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-4689/.minikube/files/etc/ssl/certs/82382.pem --> /etc/ssl/certs/82382.pem (1708 bytes)
	I0111 07:59:09.211168  334964 start.go:296] duration metric: took 149.492819ms for postStartSetup
	I0111 07:59:09.211254  334964 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0111 07:59:09.211359  334964 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-613901
	I0111 07:59:09.229457  334964 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22402-4689/.minikube/machines/newest-cni-613901/id_rsa Username:docker}
	I0111 07:59:09.327637  334964 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0111 07:59:09.332016  334964 fix.go:56] duration metric: took 4.704346022s for fixHost
	I0111 07:59:09.332047  334964 start.go:83] releasing machines lock for "newest-cni-613901", held for 4.704396847s
	I0111 07:59:09.332108  334964 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-613901
	I0111 07:59:09.350072  334964 ssh_runner.go:195] Run: cat /version.json
	I0111 07:59:09.350136  334964 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-613901
	I0111 07:59:09.350179  334964 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0111 07:59:09.350235  334964 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-613901
	I0111 07:59:09.368179  334964 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22402-4689/.minikube/machines/newest-cni-613901/id_rsa Username:docker}
	I0111 07:59:09.369661  334964 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22402-4689/.minikube/machines/newest-cni-613901/id_rsa Username:docker}
	I0111 07:59:09.520495  334964 ssh_runner.go:195] Run: systemctl --version
	I0111 07:59:09.526908  334964 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0111 07:59:09.561332  334964 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0111 07:59:09.566073  334964 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0111 07:59:09.566163  334964 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0111 07:59:09.574262  334964 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0111 07:59:09.574284  334964 start.go:496] detecting cgroup driver to use...
	I0111 07:59:09.574338  334964 detect.go:178] detected "systemd" cgroup driver on host os
	I0111 07:59:09.574386  334964 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0111 07:59:09.588569  334964 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0111 07:59:09.600737  334964 docker.go:218] disabling cri-docker service (if available) ...
	I0111 07:59:09.600792  334964 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0111 07:59:09.615043  334964 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0111 07:59:09.627315  334964 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0111 07:59:09.713456  334964 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0111 07:59:09.799092  334964 docker.go:234] disabling docker service ...
	I0111 07:59:09.799168  334964 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0111 07:59:09.814422  334964 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0111 07:59:09.830147  334964 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0111 07:59:09.905258  334964 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0111 07:59:09.989837  334964 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0111 07:59:10.002237  334964 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0111 07:59:10.016248  334964 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0111 07:59:10.016297  334964 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 07:59:10.025347  334964 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I0111 07:59:10.025408  334964 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 07:59:10.034093  334964 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 07:59:10.043106  334964 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 07:59:10.052062  334964 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0111 07:59:10.059952  334964 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 07:59:10.069972  334964 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 07:59:10.098820  334964 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 07:59:10.108970  334964 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0111 07:59:10.116912  334964 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0111 07:59:10.124554  334964 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0111 07:59:10.223749  334964 ssh_runner.go:195] Run: sudo systemctl restart crio
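(The sed commands above all edit the same drop-in, /etc/crio/crio.conf.d/02-crio.conf — pause image, systemd cgroup manager, conmon cgroup, and the unprivileged-port sysctl — before CRI-O is restarted on the line above. A spot check of that file on the node, illustrative and not captured in this run, would look like:)
	# Confirm the values written by the sed edits above survived the restart
	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf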
	I0111 07:59:10.441457  334964 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I0111 07:59:10.441531  334964 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0111 07:59:10.445621  334964 start.go:574] Will wait 60s for crictl version
	I0111 07:59:10.445686  334964 ssh_runner.go:195] Run: which crictl
	I0111 07:59:10.449378  334964 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I0111 07:59:10.475487  334964 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.35.0
	RuntimeApiVersion:  v1
	I0111 07:59:10.475584  334964 ssh_runner.go:195] Run: crio --version
	I0111 07:59:10.502039  334964 ssh_runner.go:195] Run: crio --version
	I0111 07:59:10.534039  334964 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.35.0 ...
	I0111 07:59:10.536011  334964 cli_runner.go:164] Run: docker network inspect newest-cni-613901 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0111 07:59:10.555687  334964 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I0111 07:59:10.559996  334964 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0111 07:59:10.574386  334964 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0111 07:59:10.576441  334964 kubeadm.go:884] updating cluster {Name:newest-cni-613901 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-613901 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I0111 07:59:10.576572  334964 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0111 07:59:10.576627  334964 ssh_runner.go:195] Run: sudo crictl images --output json
	I0111 07:59:10.611129  334964 crio.go:561] all images are preloaded for cri-o runtime.
	I0111 07:59:10.611153  334964 crio.go:433] Images already preloaded, skipping extraction
	I0111 07:59:10.611200  334964 ssh_runner.go:195] Run: sudo crictl images --output json
	I0111 07:59:10.636738  334964 crio.go:561] all images are preloaded for cri-o runtime.
	I0111 07:59:10.636758  334964 cache_images.go:86] Images are preloaded, skipping loading
	I0111 07:59:10.636765  334964 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.35.0 crio true true} ...
	I0111 07:59:10.636878  334964 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-613901 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:newest-cni-613901 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
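(The kubelet unit override shown above is installed as /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below; the merged unit — base file plus drop-ins — could be inspected on the node with the following illustrative command, not part of this run:)
	# Show /lib/systemd/system/kubelet.service together with its drop-in overrides
	sudo systemctl cat kubelet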
	I0111 07:59:10.636946  334964 ssh_runner.go:195] Run: crio config
	I0111 07:59:10.686813  334964 cni.go:84] Creating CNI manager for ""
	I0111 07:59:10.686841  334964 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0111 07:59:10.686859  334964 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I0111 07:59:10.686913  334964 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-613901 NodeName:newest-cni-613901 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0111 07:59:10.687114  334964 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-613901"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0111 07:59:10.687186  334964 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I0111 07:59:10.696203  334964 binaries.go:51] Found k8s binaries, skipping transfer
	I0111 07:59:10.696275  334964 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0111 07:59:10.704132  334964 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I0111 07:59:10.716288  334964 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0111 07:59:10.728886  334964 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
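(The rendered kubeadm manifest above is copied to /var/tmp/minikube/kubeadm.yaml.new on the preceding line and later diffed against the existing kubeadm.yaml. If you want to sanity-check such a manifest by hand, an illustrative invocation — assuming the `kubeadm config validate` subcommand is available in the v1.35.0 binaries used here — would be:)
	# Validate the generated kubeadm manifest without applying it
	sudo /var/lib/minikube/binaries/v1.35.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new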
	I0111 07:59:10.741061  334964 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I0111 07:59:10.744448  334964 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0111 07:59:10.753708  334964 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0111 07:59:10.842887  334964 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0111 07:59:10.865245  334964 certs.go:69] Setting up /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/newest-cni-613901 for IP: 192.168.103.2
	I0111 07:59:10.865267  334964 certs.go:195] generating shared ca certs ...
	I0111 07:59:10.865284  334964 certs.go:227] acquiring lock for ca certs: {Name:mk9f7a57f61b85f2c0f52e44b6a926769e368f21 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 07:59:10.865438  334964 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22402-4689/.minikube/ca.key
	I0111 07:59:10.865497  334964 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22402-4689/.minikube/proxy-client-ca.key
	I0111 07:59:10.865510  334964 certs.go:257] generating profile certs ...
	I0111 07:59:10.865631  334964 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/newest-cni-613901/client.key
	I0111 07:59:10.865716  334964 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/newest-cni-613901/apiserver.key.e492ee62
	I0111 07:59:10.865775  334964 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/newest-cni-613901/proxy-client.key
	I0111 07:59:10.865949  334964 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-4689/.minikube/certs/8238.pem (1338 bytes)
	W0111 07:59:10.865993  334964 certs.go:480] ignoring /home/jenkins/minikube-integration/22402-4689/.minikube/certs/8238_empty.pem, impossibly tiny 0 bytes
	I0111 07:59:10.866007  334964 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-4689/.minikube/certs/ca-key.pem (1679 bytes)
	I0111 07:59:10.866045  334964 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-4689/.minikube/certs/ca.pem (1082 bytes)
	I0111 07:59:10.866077  334964 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-4689/.minikube/certs/cert.pem (1123 bytes)
	I0111 07:59:10.866113  334964 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-4689/.minikube/certs/key.pem (1675 bytes)
	I0111 07:59:10.866180  334964 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-4689/.minikube/files/etc/ssl/certs/82382.pem (1708 bytes)
	I0111 07:59:10.866793  334964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-4689/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0111 07:59:10.887401  334964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-4689/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0111 07:59:10.906250  334964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-4689/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0111 07:59:10.924604  334964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-4689/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0111 07:59:10.949852  334964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/newest-cni-613901/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0111 07:59:10.968808  334964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/newest-cni-613901/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0111 07:59:10.985918  334964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/newest-cni-613901/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0111 07:59:11.002833  334964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/newest-cni-613901/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0111 07:59:11.020378  334964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-4689/.minikube/certs/8238.pem --> /usr/share/ca-certificates/8238.pem (1338 bytes)
	I0111 07:59:11.037836  334964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-4689/.minikube/files/etc/ssl/certs/82382.pem --> /usr/share/ca-certificates/82382.pem (1708 bytes)
	I0111 07:59:11.055621  334964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-4689/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0111 07:59:11.075411  334964 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I0111 07:59:11.088459  334964 ssh_runner.go:195] Run: openssl version
	I0111 07:59:11.095531  334964 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8238.pem
	I0111 07:59:11.102978  334964 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8238.pem /etc/ssl/certs/8238.pem
	I0111 07:59:11.110273  334964 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8238.pem
	I0111 07:59:11.113995  334964 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 11 07:27 /usr/share/ca-certificates/8238.pem
	I0111 07:59:11.114043  334964 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8238.pem
	I0111 07:59:11.151346  334964 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I0111 07:59:11.159413  334964 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/82382.pem
	I0111 07:59:11.167366  334964 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/82382.pem /etc/ssl/certs/82382.pem
	I0111 07:59:11.174866  334964 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/82382.pem
	I0111 07:59:11.179103  334964 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 11 07:27 /usr/share/ca-certificates/82382.pem
	I0111 07:59:11.179160  334964 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/82382.pem
	I0111 07:59:11.216605  334964 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I0111 07:59:11.224963  334964 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I0111 07:59:11.232698  334964 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I0111 07:59:11.240302  334964 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0111 07:59:11.244155  334964 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 11 07:23 /usr/share/ca-certificates/minikubeCA.pem
	I0111 07:59:11.244199  334964 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0111 07:59:11.279346  334964 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
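(The symlink names tested above — 51391683.0, 3ec20f2e.0, b5213941.0 — are the OpenSSL subject hashes of the corresponding certificates plus a ".0" suffix, which is how OpenSSL locates CA certificates in /etc/ssl/certs. Reproduced by hand, illustratively:)
	# The subject hash printed here is the basename of the symlink checked above
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # -> b5213941 in this run
	ls -l /etc/ssl/certs/b5213941.0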
	I0111 07:59:11.286669  334964 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0111 07:59:11.290285  334964 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0111 07:59:11.324532  334964 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0111 07:59:11.358429  334964 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0111 07:59:11.398855  334964 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0111 07:59:11.440349  334964 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0111 07:59:11.480988  334964 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0111 07:59:11.529408  334964 kubeadm.go:401] StartCluster: {Name:newest-cni-613901 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-613901 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0111 07:59:11.529486  334964 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0111 07:59:11.529543  334964 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0111 07:59:11.560122  334964 cri.go:96] found id: "5fde0630295af45e89f5405104c2da9f4db5fab8eabd4bbb10a152471a695bdf"
	I0111 07:59:11.560145  334964 cri.go:96] found id: "7c6ff3c61a9404f9369402c76cb8f90e85bc10e8983eb76b28897356f832d40b"
	I0111 07:59:11.560150  334964 cri.go:96] found id: "9ac933f6049e2a09e4fb03396413211d137f8f7533b877795c11868e4bcbda45"
	I0111 07:59:11.560153  334964 cri.go:96] found id: "f2730725cc13c39e4654e49efcdc2e2e8c0f5b6de0213c0834f4b0842b6d5b2f"
	I0111 07:59:11.560156  334964 cri.go:96] found id: ""
	I0111 07:59:11.560190  334964 ssh_runner.go:195] Run: sudo runc list -f json
	W0111 07:59:11.571627  334964 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-11T07:59:11Z" level=error msg="open /run/runc: no such file or directory"
	I0111 07:59:11.571696  334964 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0111 07:59:11.579342  334964 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I0111 07:59:11.579360  334964 kubeadm.go:598] restartPrimaryControlPlane start ...
	I0111 07:59:11.579403  334964 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0111 07:59:11.586452  334964 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0111 07:59:11.586841  334964 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-613901" does not appear in /home/jenkins/minikube-integration/22402-4689/kubeconfig
	I0111 07:59:11.586959  334964 kubeconfig.go:62] /home/jenkins/minikube-integration/22402-4689/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-613901" cluster setting kubeconfig missing "newest-cni-613901" context setting]
	I0111 07:59:11.587271  334964 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-4689/kubeconfig: {Name:mkf02a7b755a7b9c0efbafb355649086e2a25e84 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 07:59:11.588719  334964 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0111 07:59:11.596118  334964 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.103.2
	I0111 07:59:11.596143  334964 kubeadm.go:602] duration metric: took 16.77889ms to restartPrimaryControlPlane
	I0111 07:59:11.596150  334964 kubeadm.go:403] duration metric: took 66.754181ms to StartCluster
	I0111 07:59:11.596165  334964 settings.go:142] acquiring lock: {Name:mk091111a06dcfa678353e028948ce400417c137 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 07:59:11.596213  334964 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22402-4689/kubeconfig
	I0111 07:59:11.596713  334964 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-4689/kubeconfig: {Name:mkf02a7b755a7b9c0efbafb355649086e2a25e84 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 07:59:11.596959  334964 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0111 07:59:11.597040  334964 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0111 07:59:11.597146  334964 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-613901"
	I0111 07:59:11.597173  334964 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-613901"
	W0111 07:59:11.597185  334964 addons.go:248] addon storage-provisioner should already be in state true
	I0111 07:59:11.597213  334964 host.go:66] Checking if "newest-cni-613901" exists ...
	I0111 07:59:11.597234  334964 config.go:182] Loaded profile config "newest-cni-613901": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 07:59:11.597295  334964 addons.go:70] Setting dashboard=true in profile "newest-cni-613901"
	I0111 07:59:11.597312  334964 addons.go:239] Setting addon dashboard=true in "newest-cni-613901"
	W0111 07:59:11.597323  334964 addons.go:248] addon dashboard should already be in state true
	I0111 07:59:11.597345  334964 host.go:66] Checking if "newest-cni-613901" exists ...
	I0111 07:59:11.597647  334964 addons.go:70] Setting default-storageclass=true in profile "newest-cni-613901"
	I0111 07:59:11.597691  334964 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-613901"
	I0111 07:59:11.597722  334964 cli_runner.go:164] Run: docker container inspect newest-cni-613901 --format={{.State.Status}}
	I0111 07:59:11.597830  334964 cli_runner.go:164] Run: docker container inspect newest-cni-613901 --format={{.State.Status}}
	I0111 07:59:11.598056  334964 cli_runner.go:164] Run: docker container inspect newest-cni-613901 --format={{.State.Status}}
	I0111 07:59:11.599937  334964 out.go:179] * Verifying Kubernetes components...
	I0111 07:59:11.601352  334964 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0111 07:59:11.622990  334964 addons.go:239] Setting addon default-storageclass=true in "newest-cni-613901"
	W0111 07:59:11.623068  334964 addons.go:248] addon default-storageclass should already be in state true
	I0111 07:59:11.623101  334964 host.go:66] Checking if "newest-cni-613901" exists ...
	I0111 07:59:11.623557  334964 cli_runner.go:164] Run: docker container inspect newest-cni-613901 --format={{.State.Status}}
	I0111 07:59:11.625239  334964 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0111 07:59:11.625264  334964 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0111 07:59:11.626828  334964 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0111 07:59:11.626846  334964 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0111 07:59:11.626870  334964 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I0111 07:59:11.626904  334964 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-613901
	I0111 07:59:11.631221  334964 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0111 07:59:11.631242  334964 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0111 07:59:11.631299  334964 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-613901
	I0111 07:59:11.650586  334964 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I0111 07:59:11.650611  334964 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0111 07:59:11.650666  334964 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-613901
	I0111 07:59:11.654223  334964 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22402-4689/.minikube/machines/newest-cni-613901/id_rsa Username:docker}
	I0111 07:59:11.669366  334964 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22402-4689/.minikube/machines/newest-cni-613901/id_rsa Username:docker}
	I0111 07:59:11.678106  334964 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22402-4689/.minikube/machines/newest-cni-613901/id_rsa Username:docker}
	I0111 07:59:11.743579  334964 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0111 07:59:11.756230  334964 api_server.go:52] waiting for apiserver process to appear ...
	I0111 07:59:11.756301  334964 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0111 07:59:11.767300  334964 api_server.go:72] duration metric: took 170.30619ms to wait for apiserver process to appear ...
	I0111 07:59:11.767325  334964 api_server.go:88] waiting for apiserver healthz status ...
	I0111 07:59:11.767352  334964 api_server.go:299] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I0111 07:59:11.773847  334964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0111 07:59:11.782067  334964 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0111 07:59:11.782087  334964 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0111 07:59:11.794512  334964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0111 07:59:11.796983  334964 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0111 07:59:11.797002  334964 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0111 07:59:11.809240  334964 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0111 07:59:11.809256  334964 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0111 07:59:11.823739  334964 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0111 07:59:11.823815  334964 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0111 07:59:11.837315  334964 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0111 07:59:11.837336  334964 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0111 07:59:11.850834  334964 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0111 07:59:11.850861  334964 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0111 07:59:11.863327  334964 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0111 07:59:11.863352  334964 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0111 07:59:11.875938  334964 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0111 07:59:11.875962  334964 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0111 07:59:11.888026  334964 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0111 07:59:11.888045  334964 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0111 07:59:11.899839  334964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0111 07:59:12.957262  334964 api_server.go:325] https://192.168.103.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0111 07:59:12.957315  334964 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0111 07:59:12.957335  334964 api_server.go:299] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I0111 07:59:12.969107  334964 api_server.go:325] https://192.168.103.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0111 07:59:12.969140  334964 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0111 07:59:13.267631  334964 api_server.go:299] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I0111 07:59:13.272080  334964 api_server.go:325] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0111 07:59:13.272112  334964 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
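(The 403 and 500 responses above are the normal progression while the restarted apiserver finishes its bootstrap post-start hooks — rbac/bootstrap-roles and the system priority-class seeding are still pending — so minikube treats both as not-ready and keeps polling. The same probe can be issued by hand from the host; illustrative, with -k skipping verification of the cluster's self-signed serving certificate:)
	# Poll the apiserver health endpoint the same way the log does
	curl -k https://192.168.103.2:8443/healthz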
	I0111 07:59:13.467623  334964 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.693716677s)
	I0111 07:59:13.467701  334964 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.673159224s)
	I0111 07:59:13.468045  334964 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.568154845s)
	I0111 07:59:13.469846  334964 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-613901 addons enable metrics-server
	
	I0111 07:59:13.478393  334964 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I0111 07:59:13.479661  334964 addons.go:530] duration metric: took 1.882638361s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I0111 07:59:13.767555  334964 api_server.go:299] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I0111 07:59:13.771691  334964 api_server.go:325] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0111 07:59:13.771719  334964 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0111 07:59:14.267989  334964 api_server.go:299] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I0111 07:59:14.273325  334964 api_server.go:325] https://192.168.103.2:8443/healthz returned 200:
	ok
	I0111 07:59:14.274427  334964 api_server.go:141] control plane version: v1.35.0
	I0111 07:59:14.274451  334964 api_server.go:131] duration metric: took 2.507119769s to wait for apiserver health ...
	I0111 07:59:14.274459  334964 system_pods.go:43] waiting for kube-system pods to appear ...
	I0111 07:59:14.278316  334964 system_pods.go:59] 8 kube-system pods found
	I0111 07:59:14.278359  334964 system_pods.go:61] "coredns-7d764666f9-tp2lb" [b1be6675-385f-4ec3-98c2-fd56542d3240] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0111 07:59:14.278377  334964 system_pods.go:61] "etcd-newest-cni-613901" [d53cbc3d-297b-427f-91e8-27852d932c59] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0111 07:59:14.278399  334964 system_pods.go:61] "kindnet-pvbpb" [71b2345b-ec04-4adc-8476-37a8fdd70756] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0111 07:59:14.278416  334964 system_pods.go:61] "kube-apiserver-newest-cni-613901" [3e516895-717b-41f3-894c-228f6ec7582e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0111 07:59:14.278428  334964 system_pods.go:61] "kube-controller-manager-newest-cni-613901" [d43f31c7-0834-4aa6-8679-49bd34282b2c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0111 07:59:14.278437  334964 system_pods.go:61] "kube-proxy-8hjlp" [496988d3-27d9-4ba8-b70b-f8a570aaa6aa] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0111 07:59:14.278448  334964 system_pods.go:61] "kube-scheduler-newest-cni-613901" [f2246e87-0b37-4475-aa04-ed6117f7df38] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0111 07:59:14.278457  334964 system_pods.go:61] "storage-provisioner" [f2494dd9-c249-4496-b604-95f98c228cf5] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0111 07:59:14.278468  334964 system_pods.go:74] duration metric: took 4.003054ms to wait for pod list to return data ...
	I0111 07:59:14.278479  334964 default_sa.go:34] waiting for default service account to be created ...
	I0111 07:59:14.281207  334964 default_sa.go:45] found service account: "default"
	I0111 07:59:14.281230  334964 default_sa.go:55] duration metric: took 2.743124ms for default service account to be created ...
	I0111 07:59:14.281240  334964 kubeadm.go:587] duration metric: took 2.684251601s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0111 07:59:14.281256  334964 node_conditions.go:102] verifying NodePressure condition ...
	I0111 07:59:14.283954  334964 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0111 07:59:14.283976  334964 node_conditions.go:123] node cpu capacity is 8
	I0111 07:59:14.283987  334964 node_conditions.go:105] duration metric: took 2.726781ms to run NodePressure ...
	I0111 07:59:14.283996  334964 start.go:242] waiting for startup goroutines ...
	I0111 07:59:14.284002  334964 start.go:247] waiting for cluster config update ...
	I0111 07:59:14.284012  334964 start.go:256] writing updated cluster config ...
	I0111 07:59:14.284234  334964 ssh_runner.go:195] Run: rm -f paused
	I0111 07:59:14.341971  334964 start.go:625] kubectl: 1.35.0, cluster: 1.35.0 (minor skew: 0)
	I0111 07:59:14.343556  334964 out.go:179] * Done! kubectl is now configured to use "newest-cni-613901" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Jan 11 07:59:14 newest-cni-613901 crio[526]: time="2026-01-11T07:59:14.238940693Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 11 07:59:14 newest-cni-613901 crio[526]: time="2026-01-11T07:59:14.242736378Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=9fdebeb4-a270-45b2-886d-7fcc4f4540d8 name=/runtime.v1.RuntimeService/RunPodSandbox
	Jan 11 07:59:14 newest-cni-613901 crio[526]: time="2026-01-11T07:59:14.244304304Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Jan 11 07:59:14 newest-cni-613901 crio[526]: time="2026-01-11T07:59:14.244797001Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=e9507853-7d7e-46b9-b37a-58c4619c4d04 name=/runtime.v1.RuntimeService/RunPodSandbox
	Jan 11 07:59:14 newest-cni-613901 crio[526]: time="2026-01-11T07:59:14.244977483Z" level=info msg="Ran pod sandbox 283bdab36cefb47316aadabcd8c100fd5926e3a36dde7a566f43e9ad0fec8e31 with infra container: kube-system/kindnet-pvbpb/POD" id=9fdebeb4-a270-45b2-886d-7fcc4f4540d8 name=/runtime.v1.RuntimeService/RunPodSandbox
	Jan 11 07:59:14 newest-cni-613901 crio[526]: time="2026-01-11T07:59:14.246142266Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88" id=114279cf-0f45-43e6-9911-c8e6c6cf224b name=/runtime.v1.ImageService/ImageStatus
	Jan 11 07:59:14 newest-cni-613901 crio[526]: time="2026-01-11T07:59:14.246419219Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Jan 11 07:59:14 newest-cni-613901 crio[526]: time="2026-01-11T07:59:14.247192596Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88" id=b81d65ed-8b07-4a4b-9a88-ce65a4adf87e name=/runtime.v1.ImageService/ImageStatus
	Jan 11 07:59:14 newest-cni-613901 crio[526]: time="2026-01-11T07:59:14.247292216Z" level=info msg="Ran pod sandbox 372e3e16163c8979411527f9e4d8ed07bf9acfb6b4e652301d6c4575d48c6ac8 with infra container: kube-system/kube-proxy-8hjlp/POD" id=e9507853-7d7e-46b9-b37a-58c4619c4d04 name=/runtime.v1.RuntimeService/RunPodSandbox
	Jan 11 07:59:14 newest-cni-613901 crio[526]: time="2026-01-11T07:59:14.248254349Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0" id=c25f9c33-22d8-4a71-ac1c-2a67e9c81dfd name=/runtime.v1.ImageService/ImageStatus
	Jan 11 07:59:14 newest-cni-613901 crio[526]: time="2026-01-11T07:59:14.248313525Z" level=info msg="Creating container: kube-system/kindnet-pvbpb/kindnet-cni" id=88c8f760-f33f-4de5-b7bd-615cda507f6e name=/runtime.v1.RuntimeService/CreateContainer
	Jan 11 07:59:14 newest-cni-613901 crio[526]: time="2026-01-11T07:59:14.248396839Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 11 07:59:14 newest-cni-613901 crio[526]: time="2026-01-11T07:59:14.249156891Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0" id=5933a607-5cc4-4706-8d44-f6394b3801c9 name=/runtime.v1.ImageService/ImageStatus
	Jan 11 07:59:14 newest-cni-613901 crio[526]: time="2026-01-11T07:59:14.251351658Z" level=info msg="Creating container: kube-system/kube-proxy-8hjlp/kube-proxy" id=6f459886-b4c7-47ba-8cc8-4ab29137330b name=/runtime.v1.RuntimeService/CreateContainer
	Jan 11 07:59:14 newest-cni-613901 crio[526]: time="2026-01-11T07:59:14.25147779Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 11 07:59:14 newest-cni-613901 crio[526]: time="2026-01-11T07:59:14.2523352Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 11 07:59:14 newest-cni-613901 crio[526]: time="2026-01-11T07:59:14.252941138Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 11 07:59:14 newest-cni-613901 crio[526]: time="2026-01-11T07:59:14.255629843Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 11 07:59:14 newest-cni-613901 crio[526]: time="2026-01-11T07:59:14.256128096Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 11 07:59:14 newest-cni-613901 crio[526]: time="2026-01-11T07:59:14.278374046Z" level=info msg="Created container b50458451c17e544321f77f3c332d3986154320d5fb901d864c1c4290d29bc0f: kube-system/kindnet-pvbpb/kindnet-cni" id=88c8f760-f33f-4de5-b7bd-615cda507f6e name=/runtime.v1.RuntimeService/CreateContainer
	Jan 11 07:59:14 newest-cni-613901 crio[526]: time="2026-01-11T07:59:14.279007019Z" level=info msg="Starting container: b50458451c17e544321f77f3c332d3986154320d5fb901d864c1c4290d29bc0f" id=2e9a4ddb-dfd9-48bc-b430-0e4e45ac8437 name=/runtime.v1.RuntimeService/StartContainer
	Jan 11 07:59:14 newest-cni-613901 crio[526]: time="2026-01-11T07:59:14.281008337Z" level=info msg="Started container" PID=1060 containerID=b50458451c17e544321f77f3c332d3986154320d5fb901d864c1c4290d29bc0f description=kube-system/kindnet-pvbpb/kindnet-cni id=2e9a4ddb-dfd9-48bc-b430-0e4e45ac8437 name=/runtime.v1.RuntimeService/StartContainer sandboxID=283bdab36cefb47316aadabcd8c100fd5926e3a36dde7a566f43e9ad0fec8e31
	Jan 11 07:59:14 newest-cni-613901 crio[526]: time="2026-01-11T07:59:14.28163797Z" level=info msg="Created container b5a7add7780eb91d02b1cb5d471c9be2b995066874ffca0b9fbabd59696a5ef7: kube-system/kube-proxy-8hjlp/kube-proxy" id=6f459886-b4c7-47ba-8cc8-4ab29137330b name=/runtime.v1.RuntimeService/CreateContainer
	Jan 11 07:59:14 newest-cni-613901 crio[526]: time="2026-01-11T07:59:14.282244138Z" level=info msg="Starting container: b5a7add7780eb91d02b1cb5d471c9be2b995066874ffca0b9fbabd59696a5ef7" id=dccfc480-bd53-4493-90a4-e0d758d7d002 name=/runtime.v1.RuntimeService/StartContainer
	Jan 11 07:59:14 newest-cni-613901 crio[526]: time="2026-01-11T07:59:14.285355379Z" level=info msg="Started container" PID=1061 containerID=b5a7add7780eb91d02b1cb5d471c9be2b995066874ffca0b9fbabd59696a5ef7 description=kube-system/kube-proxy-8hjlp/kube-proxy id=dccfc480-bd53-4493-90a4-e0d758d7d002 name=/runtime.v1.RuntimeService/StartContainer sandboxID=372e3e16163c8979411527f9e4d8ed07bf9acfb6b4e652301d6c4575d48c6ac8
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	b5a7add7780eb       32652ff1bbe6b6df16a3dc621fcad4e6e185a2852bffc5847aab79e36c54bfb8   5 seconds ago       Running             kube-proxy                1                   372e3e16163c8       kube-proxy-8hjlp                            kube-system
	b50458451c17e       4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251   5 seconds ago       Running             kindnet-cni               1                   283bdab36cefb       kindnet-pvbpb                               kube-system
	5fde0630295af       550794e3b12ac21ec7fd940bdfb45f7c8dae3c52acd8c70e580b882de20c3dcc   7 seconds ago       Running             kube-scheduler            1                   ad2f70a8b0d01       kube-scheduler-newest-cni-613901            kube-system
	7c6ff3c61a940       2c9a4b058bd7e6ec479c38f9e8a1dac2f5ee5b0a3ebda6dfac968e8720229508   7 seconds ago       Running             kube-controller-manager   1                   7988b3e2b3469       kube-controller-manager-newest-cni-613901   kube-system
	9ac933f6049e2       0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2   7 seconds ago       Running             etcd                      1                   a71b119128305       etcd-newest-cni-613901                      kube-system
	f2730725cc13c       5c6acd67e9cd10eb60a246cd233db251ef62ea97e6572f897e873f0cb648f499   7 seconds ago       Running             kube-apiserver            1                   65907d5c3c234       kube-apiserver-newest-cni-613901            kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-613901
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-613901
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4473aa4ffaa416872fe849e19c0ce3dabca02c04
	                    minikube.k8s.io/name=newest-cni-613901
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2026_01_11T07_58_39_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 11 Jan 2026 07:58:35 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-613901
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 11 Jan 2026 07:59:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 11 Jan 2026 07:59:13 +0000   Sun, 11 Jan 2026 07:58:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 11 Jan 2026 07:59:13 +0000   Sun, 11 Jan 2026 07:58:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 11 Jan 2026 07:59:13 +0000   Sun, 11 Jan 2026 07:58:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sun, 11 Jan 2026 07:59:13 +0000   Sun, 11 Jan 2026 07:58:34 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    newest-cni-613901
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 19a608e996fda0d6a8d0aefd69620b69
	  System UUID:                e49a8585-8f8b-41f4-a337-ff9547046d3e
	  Boot ID:                    bf5ef4fd-1e5e-4242-b522-48b308ed11f1
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.35.0
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-613901                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         41s
	  kube-system                 kindnet-pvbpb                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      36s
	  kube-system                 kube-apiserver-newest-cni-613901             250m (3%)     0 (0%)      0 (0%)           0 (0%)         41s
	  kube-system                 kube-controller-manager-newest-cni-613901    200m (2%)     0 (0%)      0 (0%)           0 (0%)         41s
	  kube-system                 kube-proxy-8hjlp                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         36s
	  kube-system                 kube-scheduler-newest-cni-613901             100m (1%)     0 (0%)      0 (0%)           0 (0%)         43s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  37s   node-controller  Node newest-cni-613901 event: Registered Node newest-cni-613901 in Controller
	  Normal  RegisteredNode  3s    node-controller  Node newest-cni-613901 event: Registered Node newest-cni-613901 in Controller
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff d2 27 60 f0 e8 33 08 06
	[  +7.744805] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ae ab 39 92 a9 51 08 06
	[  +3.804668] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff f6 cc 68 e4 77 08 08 06
	[  +0.000399] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff d2 27 60 f0 e8 33 08 06
	[ +13.685163] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff ce 63 58 3f ae 06 08 06
	[  +0.000359] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ae ab 39 92 a9 51 08 06
	[ +15.294069] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 62 fd 79 5e 2e 8a 08 06
	[  +0.000335] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 92 a7 42 30 c5 33 08 06
	[  +0.001094] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 76 49 4e 78 31 d8 08 06
	[Jan11 07:56] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 1e 72 0a 69 e2 8c 08 06
	[  +0.129588] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 46 31 8d 74 d3 50 08 06
	[ +23.675508] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 72 b4 a7 bf 59 ba 08 06
	[  +0.000409] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 46 31 8d 74 d3 50 08 06
	
	
	==> etcd [9ac933f6049e2a09e4fb03396413211d137f8f7533b877795c11868e4bcbda45] <==
	{"level":"info","ts":"2026-01-11T07:59:11.521410Z","caller":"embed/etcd.go:292","msg":"now serving peer/client/metrics","local-member-id":"f23060b075c4c089","initial-advertise-peer-urls":["https://192.168.103.2:2380"],"listen-peer-urls":["https://192.168.103.2:2380"],"advertise-client-urls":["https://192.168.103.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.103.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2026-01-11T07:59:11.521458Z","caller":"embed/etcd.go:890","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2026-01-11T07:59:11.521543Z","caller":"embed/etcd.go:640","msg":"serving peer traffic","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2026-01-11T07:59:11.521557Z","caller":"embed/etcd.go:611","msg":"cmux::serve","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2026-01-11T07:59:11.523298Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"f23060b075c4c089 switched to configuration voters=(17451554867067011209)"}
	{"level":"info","ts":"2026-01-11T07:59:11.523478Z","caller":"membership/cluster.go:433","msg":"ignore already added member","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","added-peer-id":"f23060b075c4c089","added-peer-peer-urls":["https://192.168.103.2:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2026-01-11T07:59:11.523606Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","from":"3.6","to":"3.6"}
	{"level":"info","ts":"2026-01-11T07:59:12.112761Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"f23060b075c4c089 is starting a new election at term 2"}
	{"level":"info","ts":"2026-01-11T07:59:12.112850Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"f23060b075c4c089 became pre-candidate at term 2"}
	{"level":"info","ts":"2026-01-11T07:59:12.112901Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"f23060b075c4c089 received MsgPreVoteResp from f23060b075c4c089 at term 2"}
	{"level":"info","ts":"2026-01-11T07:59:12.112914Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"f23060b075c4c089 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2026-01-11T07:59:12.112932Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"f23060b075c4c089 became candidate at term 3"}
	{"level":"info","ts":"2026-01-11T07:59:12.113737Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"f23060b075c4c089 received MsgVoteResp from f23060b075c4c089 at term 3"}
	{"level":"info","ts":"2026-01-11T07:59:12.113774Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"f23060b075c4c089 has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2026-01-11T07:59:12.113792Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"f23060b075c4c089 became leader at term 3"}
	{"level":"info","ts":"2026-01-11T07:59:12.113819Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: f23060b075c4c089 elected leader f23060b075c4c089 at term 3"}
	{"level":"info","ts":"2026-01-11T07:59:12.114533Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"f23060b075c4c089","local-member-attributes":"{Name:newest-cni-613901 ClientURLs:[https://192.168.103.2:2379]}","cluster-id":"3336683c081d149d","publish-timeout":"7s"}
	{"level":"info","ts":"2026-01-11T07:59:12.114554Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-11T07:59:12.114593Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-11T07:59:12.114856Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2026-01-11T07:59:12.114894Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2026-01-11T07:59:12.115955Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-11T07:59:12.115953Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-11T07:59:12.119496Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2026-01-11T07:59:12.119523Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.103.2:2379"}
	
	
	==> kernel <==
	 07:59:19 up 41 min,  0 user,  load average: 2.72, 3.29, 2.51
	Linux newest-cni-613901 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [b50458451c17e544321f77f3c332d3986154320d5fb901d864c1c4290d29bc0f] <==
	I0111 07:59:14.488750       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0111 07:59:14.541710       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I0111 07:59:14.541858       1 main.go:148] setting mtu 1500 for CNI 
	I0111 07:59:14.541882       1 main.go:178] kindnetd IP family: "ipv4"
	I0111 07:59:14.541893       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2026-01-11T07:59:14Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0111 07:59:14.743286       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0111 07:59:14.744090       1 controller.go:381] "Waiting for informer caches to sync"
	I0111 07:59:14.744126       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0111 07:59:14.744429       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I0111 07:59:15.047417       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0111 07:59:15.047533       1 metrics.go:72] Registering metrics
	I0111 07:59:15.047653       1 controller.go:711] "Syncing nftables rules"
	
	
	==> kube-apiserver [f2730725cc13c39e4654e49efcdc2e2e8c0f5b6de0213c0834f4b0842b6d5b2f] <==
	I0111 07:59:13.039549       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0111 07:59:13.039670       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0111 07:59:13.039834       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I0111 07:59:13.039895       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I0111 07:59:13.040090       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I0111 07:59:13.040151       1 aggregator.go:187] initial CRD sync complete...
	I0111 07:59:13.040182       1 autoregister_controller.go:144] Starting autoregister controller
	I0111 07:59:13.040206       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0111 07:59:13.040239       1 cache.go:39] Caches are synced for autoregister controller
	I0111 07:59:13.040667       1 shared_informer.go:377] "Caches are synced"
	I0111 07:59:13.040685       1 policy_source.go:248] refreshing policies
	I0111 07:59:13.044975       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I0111 07:59:13.055375       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0111 07:59:13.061057       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I0111 07:59:13.281268       1 controller.go:667] quota admission added evaluator for: namespaces
	I0111 07:59:13.308237       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I0111 07:59:13.324789       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0111 07:59:13.331221       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0111 07:59:13.338407       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I0111 07:59:13.366403       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.96.84.88"}
	I0111 07:59:13.376404       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.100.14.118"}
	I0111 07:59:13.942012       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I0111 07:59:16.655987       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0111 07:59:16.753618       1 controller.go:667] quota admission added evaluator for: endpoints
	I0111 07:59:16.803784       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [7c6ff3c61a9404f9369402c76cb8f90e85bc10e8983eb76b28897356f832d40b] <==
	I0111 07:59:16.206180       1 shared_informer.go:377] "Caches are synced"
	I0111 07:59:16.206191       1 shared_informer.go:377] "Caches are synced"
	I0111 07:59:16.206180       1 shared_informer.go:377] "Caches are synced"
	I0111 07:59:16.206320       1 shared_informer.go:377] "Caches are synced"
	I0111 07:59:16.206331       1 shared_informer.go:377] "Caches are synced"
	I0111 07:59:16.206340       1 shared_informer.go:377] "Caches are synced"
	I0111 07:59:16.206344       1 shared_informer.go:377] "Caches are synced"
	I0111 07:59:16.206361       1 shared_informer.go:377] "Caches are synced"
	I0111 07:59:16.206360       1 shared_informer.go:377] "Caches are synced"
	I0111 07:59:16.207233       1 shared_informer.go:377] "Caches are synced"
	I0111 07:59:16.207412       1 shared_informer.go:377] "Caches are synced"
	I0111 07:59:16.207434       1 shared_informer.go:377] "Caches are synced"
	I0111 07:59:16.207444       1 shared_informer.go:377] "Caches are synced"
	I0111 07:59:16.207595       1 shared_informer.go:377] "Caches are synced"
	I0111 07:59:16.207605       1 shared_informer.go:377] "Caches are synced"
	I0111 07:59:16.207609       1 shared_informer.go:377] "Caches are synced"
	I0111 07:59:16.207625       1 shared_informer.go:377] "Caches are synced"
	I0111 07:59:16.207607       1 shared_informer.go:377] "Caches are synced"
	I0111 07:59:16.207773       1 shared_informer.go:377] "Caches are synced"
	I0111 07:59:16.211398       1 shared_informer.go:377] "Caches are synced"
	I0111 07:59:16.216434       1 shared_informer.go:370] "Waiting for caches to sync"
	I0111 07:59:16.307647       1 shared_informer.go:377] "Caches are synced"
	I0111 07:59:16.307665       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I0111 07:59:16.307670       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I0111 07:59:16.316829       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [b5a7add7780eb91d02b1cb5d471c9be2b995066874ffca0b9fbabd59696a5ef7] <==
	I0111 07:59:14.325229       1 server_linux.go:53] "Using iptables proxy"
	I0111 07:59:14.395861       1 shared_informer.go:370] "Waiting for caches to sync"
	I0111 07:59:14.496360       1 shared_informer.go:377] "Caches are synced"
	I0111 07:59:14.496391       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E0111 07:59:14.496456       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0111 07:59:14.515399       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0111 07:59:14.515462       1 server_linux.go:136] "Using iptables Proxier"
	I0111 07:59:14.520241       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0111 07:59:14.520558       1 server.go:529] "Version info" version="v1.35.0"
	I0111 07:59:14.520587       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0111 07:59:14.521656       1 config.go:200] "Starting service config controller"
	I0111 07:59:14.521674       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0111 07:59:14.521676       1 config.go:403] "Starting serviceCIDR config controller"
	I0111 07:59:14.521688       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0111 07:59:14.521661       1 config.go:106] "Starting endpoint slice config controller"
	I0111 07:59:14.521700       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0111 07:59:14.521715       1 config.go:309] "Starting node config controller"
	I0111 07:59:14.521726       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0111 07:59:14.622119       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0111 07:59:14.622152       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0111 07:59:14.622166       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0111 07:59:14.622175       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [5fde0630295af45e89f5405104c2da9f4db5fab8eabd4bbb10a152471a695bdf] <==
	I0111 07:59:11.654676       1 serving.go:386] Generated self-signed cert in-memory
	W0111 07:59:12.955888       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0111 07:59:12.956008       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0111 07:59:12.956059       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0111 07:59:12.956090       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0111 07:59:12.993272       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0"
	I0111 07:59:12.993410       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0111 07:59:12.998391       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0111 07:59:12.998430       1 shared_informer.go:370] "Waiting for caches to sync"
	I0111 07:59:12.998543       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0111 07:59:12.998747       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0111 07:59:13.098689       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Jan 11 07:59:13 newest-cni-613901 kubelet[678]: I0111 07:59:13.056521     678 kubelet_node_status.go:123] "Node was previously registered" node="newest-cni-613901"
	Jan 11 07:59:13 newest-cni-613901 kubelet[678]: I0111 07:59:13.056628     678 kubelet_node_status.go:77] "Successfully registered node" node="newest-cni-613901"
	Jan 11 07:59:13 newest-cni-613901 kubelet[678]: I0111 07:59:13.056666     678 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Jan 11 07:59:13 newest-cni-613901 kubelet[678]: I0111 07:59:13.057621     678 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Jan 11 07:59:13 newest-cni-613901 kubelet[678]: E0111 07:59:13.148694     678 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-613901\" already exists" pod="kube-system/kube-scheduler-newest-cni-613901"
	Jan 11 07:59:13 newest-cni-613901 kubelet[678]: I0111 07:59:13.148729     678 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-613901"
	Jan 11 07:59:13 newest-cni-613901 kubelet[678]: E0111 07:59:13.154269     678 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-613901\" already exists" pod="kube-system/etcd-newest-cni-613901"
	Jan 11 07:59:13 newest-cni-613901 kubelet[678]: I0111 07:59:13.154300     678 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-613901"
	Jan 11 07:59:13 newest-cni-613901 kubelet[678]: E0111 07:59:13.159892     678 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-613901\" already exists" pod="kube-system/kube-apiserver-newest-cni-613901"
	Jan 11 07:59:13 newest-cni-613901 kubelet[678]: I0111 07:59:13.159922     678 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-613901"
	Jan 11 07:59:13 newest-cni-613901 kubelet[678]: E0111 07:59:13.166025     678 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-613901\" already exists" pod="kube-system/kube-controller-manager-newest-cni-613901"
	Jan 11 07:59:13 newest-cni-613901 kubelet[678]: I0111 07:59:13.929242     678 apiserver.go:52] "Watching apiserver"
	Jan 11 07:59:13 newest-cni-613901 kubelet[678]: E0111 07:59:13.934279     678 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-newest-cni-613901" containerName="kube-controller-manager"
	Jan 11 07:59:13 newest-cni-613901 kubelet[678]: I0111 07:59:13.934629     678 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Jan 11 07:59:13 newest-cni-613901 kubelet[678]: I0111 07:59:13.963943     678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/71b2345b-ec04-4adc-8476-37a8fdd70756-xtables-lock\") pod \"kindnet-pvbpb\" (UID: \"71b2345b-ec04-4adc-8476-37a8fdd70756\") " pod="kube-system/kindnet-pvbpb"
	Jan 11 07:59:13 newest-cni-613901 kubelet[678]: I0111 07:59:13.964028     678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/71b2345b-ec04-4adc-8476-37a8fdd70756-cni-cfg\") pod \"kindnet-pvbpb\" (UID: \"71b2345b-ec04-4adc-8476-37a8fdd70756\") " pod="kube-system/kindnet-pvbpb"
	Jan 11 07:59:13 newest-cni-613901 kubelet[678]: I0111 07:59:13.964054     678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/71b2345b-ec04-4adc-8476-37a8fdd70756-lib-modules\") pod \"kindnet-pvbpb\" (UID: \"71b2345b-ec04-4adc-8476-37a8fdd70756\") " pod="kube-system/kindnet-pvbpb"
	Jan 11 07:59:13 newest-cni-613901 kubelet[678]: I0111 07:59:13.964091     678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/496988d3-27d9-4ba8-b70b-f8a570aaa6aa-xtables-lock\") pod \"kube-proxy-8hjlp\" (UID: \"496988d3-27d9-4ba8-b70b-f8a570aaa6aa\") " pod="kube-system/kube-proxy-8hjlp"
	Jan 11 07:59:13 newest-cni-613901 kubelet[678]: I0111 07:59:13.964113     678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/496988d3-27d9-4ba8-b70b-f8a570aaa6aa-lib-modules\") pod \"kube-proxy-8hjlp\" (UID: \"496988d3-27d9-4ba8-b70b-f8a570aaa6aa\") " pod="kube-system/kube-proxy-8hjlp"
	Jan 11 07:59:13 newest-cni-613901 kubelet[678]: E0111 07:59:13.984814     678 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-613901" containerName="etcd"
	Jan 11 07:59:13 newest-cni-613901 kubelet[678]: E0111 07:59:13.984981     678 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-613901" containerName="kube-scheduler"
	Jan 11 07:59:13 newest-cni-613901 kubelet[678]: E0111 07:59:13.985111     678 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-613901" containerName="kube-apiserver"
	Jan 11 07:59:15 newest-cni-613901 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Jan 11 07:59:15 newest-cni-613901 systemd[1]: kubelet.service: Deactivated successfully.
	Jan 11 07:59:15 newest-cni-613901 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-613901 -n newest-cni-613901
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-613901 -n newest-cni-613901: exit status 2 (331.231831ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context newest-cni-613901 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: coredns-7d764666f9-tp2lb storage-provisioner dashboard-metrics-scraper-867fb5f87b-fw4r6 kubernetes-dashboard-b84665fb8-zzvtx
helpers_test.go:283: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context newest-cni-613901 describe pod coredns-7d764666f9-tp2lb storage-provisioner dashboard-metrics-scraper-867fb5f87b-fw4r6 kubernetes-dashboard-b84665fb8-zzvtx
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context newest-cni-613901 describe pod coredns-7d764666f9-tp2lb storage-provisioner dashboard-metrics-scraper-867fb5f87b-fw4r6 kubernetes-dashboard-b84665fb8-zzvtx: exit status 1 (60.983994ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-7d764666f9-tp2lb" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-867fb5f87b-fw4r6" not found
	Error from server (NotFound): pods "kubernetes-dashboard-b84665fb8-zzvtx" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context newest-cni-613901 describe pod coredns-7d764666f9-tp2lb storage-provisioner dashboard-metrics-scraper-867fb5f87b-fw4r6 kubernetes-dashboard-b84665fb8-zzvtx: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (5.22s)

                                                
                                    

Test pass (279/332)

Order	Passed test	Duration (s)
3 TestDownloadOnly/v1.28.0/json-events 8.01
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.07
9 TestDownloadOnly/v1.28.0/DeleteAll 0.22
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.35.0/json-events 2.49
13 TestDownloadOnly/v1.35.0/preload-exists 0
17 TestDownloadOnly/v1.35.0/LogsDuration 0.07
18 TestDownloadOnly/v1.35.0/DeleteAll 0.22
19 TestDownloadOnly/v1.35.0/DeleteAlwaysSucceeds 0.14
20 TestDownloadOnlyKic 0.39
21 TestBinaryMirror 0.8
22 TestOffline 60.96
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 90.47
31 TestAddons/serial/GCPAuth/Namespaces 0.15
32 TestAddons/serial/GCPAuth/FakeCredentials 7.4
48 TestAddons/StoppedEnableDisable 16.68
49 TestCertOptions 23.29
50 TestCertExpiration 207.19
52 TestForceSystemdFlag 20.08
53 TestForceSystemdEnv 22.51
58 TestErrorSpam/setup 19.2
59 TestErrorSpam/start 0.64
60 TestErrorSpam/status 0.96
61 TestErrorSpam/pause 5.23
62 TestErrorSpam/unpause 5.62
63 TestErrorSpam/stop 12.61
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 35.49
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 6.14
70 TestFunctional/serial/KubeContext 0.05
71 TestFunctional/serial/KubectlGetPods 0.08
74 TestFunctional/serial/CacheCmd/cache/add_remote 2.47
75 TestFunctional/serial/CacheCmd/cache/add_local 0.89
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
77 TestFunctional/serial/CacheCmd/cache/list 0.06
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.3
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.5
80 TestFunctional/serial/CacheCmd/cache/delete 0.12
81 TestFunctional/serial/MinikubeKubectlCmd 0.12
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
83 TestFunctional/serial/ExtraConfig 52.89
84 TestFunctional/serial/ComponentHealth 0.06
85 TestFunctional/serial/LogsCmd 1.19
86 TestFunctional/serial/LogsFileCmd 1.2
87 TestFunctional/serial/InvalidService 3.6
89 TestFunctional/parallel/ConfigCmd 0.43
90 TestFunctional/parallel/DashboardCmd 7.55
91 TestFunctional/parallel/DryRun 0.49
92 TestFunctional/parallel/InternationalLanguage 0.17
93 TestFunctional/parallel/StatusCmd 1.01
97 TestFunctional/parallel/ServiceCmdConnect 22.68
98 TestFunctional/parallel/AddonsCmd 0.14
99 TestFunctional/parallel/PersistentVolumeClaim 35.33
101 TestFunctional/parallel/SSHCmd 0.58
102 TestFunctional/parallel/CpCmd 1.81
103 TestFunctional/parallel/MySQL 39
104 TestFunctional/parallel/FileSync 0.33
105 TestFunctional/parallel/CertSync 1.88
109 TestFunctional/parallel/NodeLabels 0.08
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.67
113 TestFunctional/parallel/License 0.41
115 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.49
116 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
118 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 22.24
119 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.07
120 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
124 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
125 TestFunctional/parallel/ServiceCmd/DeployApp 7.24
126 TestFunctional/parallel/ProfileCmd/profile_not_create 0.44
127 TestFunctional/parallel/ProfileCmd/profile_list 0.42
128 TestFunctional/parallel/ProfileCmd/profile_json_output 0.4
129 TestFunctional/parallel/MountCmd/any-port 7.82
130 TestFunctional/parallel/ServiceCmd/List 0.93
131 TestFunctional/parallel/ServiceCmd/JSONOutput 0.92
132 TestFunctional/parallel/ServiceCmd/HTTPS 0.56
133 TestFunctional/parallel/ServiceCmd/Format 0.6
134 TestFunctional/parallel/MountCmd/specific-port 2.12
135 TestFunctional/parallel/ServiceCmd/URL 0.57
136 TestFunctional/parallel/MountCmd/VerifyCleanup 1.84
137 TestFunctional/parallel/ImageCommands/ImageListShort 0.25
138 TestFunctional/parallel/ImageCommands/ImageListTable 0.24
139 TestFunctional/parallel/ImageCommands/ImageListJson 0.25
140 TestFunctional/parallel/ImageCommands/ImageListYaml 0.25
141 TestFunctional/parallel/ImageCommands/ImageBuild 2.53
142 TestFunctional/parallel/ImageCommands/Setup 0.34
143 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.29
144 TestFunctional/parallel/Version/short 0.08
145 TestFunctional/parallel/Version/components 0.64
146 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.11
147 TestFunctional/parallel/UpdateContextCmd/no_changes 0.19
148 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.19
149 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.2
150 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.42
151 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.34
152 TestFunctional/parallel/ImageCommands/ImageRemove 0.5
153 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.59
154 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.38
155 TestFunctional/delete_echo-server_images 0.04
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.02
162 TestMultiControlPlane/serial/StartCluster 101.64
163 TestMultiControlPlane/serial/DeployApp 4.23
164 TestMultiControlPlane/serial/PingHostFromPods 1
165 TestMultiControlPlane/serial/AddWorkerNode 23.7
166 TestMultiControlPlane/serial/NodeLabels 0.06
167 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.91
168 TestMultiControlPlane/serial/CopyFile 17.13
169 TestMultiControlPlane/serial/StopSecondaryNode 13.32
170 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.73
171 TestMultiControlPlane/serial/RestartSecondaryNode 8.51
172 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.93
173 TestMultiControlPlane/serial/RestartClusterKeepsNodes 107.6
174 TestMultiControlPlane/serial/DeleteSecondaryNode 10.71
175 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.73
176 TestMultiControlPlane/serial/StopCluster 47.71
177 TestMultiControlPlane/serial/RestartCluster 51.99
178 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.71
179 TestMultiControlPlane/serial/AddSecondaryNode 43.94
180 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.93
185 TestJSONOutput/start/Command 35.99
186 TestJSONOutput/start/Audit 0
188 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/pause/Audit 0
194 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
198 TestJSONOutput/unpause/Audit 0
200 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/stop/Command 16.16
204 TestJSONOutput/stop/Audit 0
206 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
208 TestErrorJSONOutput 0.24
210 TestKicCustomNetwork/create_custom_network 27.07
211 TestKicCustomNetwork/use_default_bridge_network 19.25
212 TestKicExistingNetwork 19.7
213 TestKicCustomSubnet 23.31
214 TestKicStaticIP 21.34
215 TestMainNoArgs 0.06
216 TestMinikubeProfile 44.17
219 TestMountStart/serial/StartWithMountFirst 7.81
220 TestMountStart/serial/VerifyMountFirst 0.28
221 TestMountStart/serial/StartWithMountSecond 7.8
222 TestMountStart/serial/VerifyMountSecond 0.27
223 TestMountStart/serial/DeleteFirst 1.66
224 TestMountStart/serial/VerifyMountPostDelete 0.28
225 TestMountStart/serial/Stop 1.24
226 TestMountStart/serial/RestartStopped 7.35
227 TestMountStart/serial/VerifyMountPostStop 0.28
230 TestMultiNode/serial/FreshStart2Nodes 60.17
231 TestMultiNode/serial/DeployApp2Nodes 3.32
232 TestMultiNode/serial/PingHostFrom2Pods 0.7
233 TestMultiNode/serial/AddNode 26.14
234 TestMultiNode/serial/MultiNodeLabels 0.06
235 TestMultiNode/serial/ProfileList 0.67
236 TestMultiNode/serial/CopyFile 9.98
237 TestMultiNode/serial/StopNode 2.3
238 TestMultiNode/serial/StartAfterStop 7.21
239 TestMultiNode/serial/RestartKeepsNodes 82.69
240 TestMultiNode/serial/DeleteNode 6.04
241 TestMultiNode/serial/StopMultiNode 30.37
242 TestMultiNode/serial/RestartMultiNode 47.45
243 TestMultiNode/serial/ValidateNameConflict 22.2
250 TestScheduledStopUnix 95.25
253 TestInsufficientStorage 11.72
254 TestRunningBinaryUpgrade 71.4
256 TestKubernetesUpgrade 332.04
257 TestMissingContainerUpgrade 68.42
258 TestStoppedBinaryUpgrade/Setup 0.55
260 TestPause/serial/Start 58.51
261 TestStoppedBinaryUpgrade/Upgrade 305.03
262 TestPause/serial/SecondStartNoReconfiguration 6.19
265 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
266 TestNoKubernetes/serial/StartWithK8s 20.4
267 TestNoKubernetes/serial/StartWithStopK8s 24.06
268 TestNoKubernetes/serial/Start 6.58
269 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
270 TestNoKubernetes/serial/VerifyK8sNotRunning 0.31
271 TestNoKubernetes/serial/ProfileList 36.96
279 TestNetworkPlugins/group/false 3.35
283 TestNoKubernetes/serial/Stop 1.29
284 TestNoKubernetes/serial/StartNoArgs 6.6
285 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.28
293 TestPreload/Start-NoPreload-PullImage 49.62
294 TestPreload/Restart-With-Preload-Check-User-Image 50.34
295 TestStoppedBinaryUpgrade/MinikubeLogs 0.98
296 TestNetworkPlugins/group/auto/Start 42.02
298 TestNetworkPlugins/group/kindnet/Start 39.87
299 TestNetworkPlugins/group/auto/KubeletFlags 0.29
300 TestNetworkPlugins/group/auto/NetCatPod 8.2
301 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
302 TestNetworkPlugins/group/auto/DNS 0.1
303 TestNetworkPlugins/group/auto/Localhost 0.09
304 TestNetworkPlugins/group/auto/HairPin 0.08
305 TestNetworkPlugins/group/kindnet/KubeletFlags 0.29
306 TestNetworkPlugins/group/kindnet/NetCatPod 8.18
307 TestNetworkPlugins/group/kindnet/DNS 0.12
308 TestNetworkPlugins/group/kindnet/Localhost 0.09
309 TestNetworkPlugins/group/kindnet/HairPin 0.1
310 TestNetworkPlugins/group/calico/Start 50.81
311 TestNetworkPlugins/group/custom-flannel/Start 49.36
312 TestNetworkPlugins/group/enable-default-cni/Start 65.02
313 TestNetworkPlugins/group/flannel/Start 42.23
314 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.35
315 TestNetworkPlugins/group/custom-flannel/NetCatPod 8.23
316 TestNetworkPlugins/group/calico/ControllerPod 6.01
317 TestNetworkPlugins/group/calico/KubeletFlags 0.31
318 TestNetworkPlugins/group/calico/NetCatPod 8.18
319 TestNetworkPlugins/group/flannel/ControllerPod 6.01
320 TestNetworkPlugins/group/custom-flannel/DNS 0.13
321 TestNetworkPlugins/group/custom-flannel/Localhost 0.1
322 TestNetworkPlugins/group/custom-flannel/HairPin 0.1
323 TestNetworkPlugins/group/flannel/KubeletFlags 0.3
324 TestNetworkPlugins/group/flannel/NetCatPod 8.18
325 TestNetworkPlugins/group/calico/DNS 0.15
326 TestNetworkPlugins/group/calico/Localhost 0.11
327 TestNetworkPlugins/group/calico/HairPin 0.11
328 TestNetworkPlugins/group/flannel/DNS 0.12
329 TestNetworkPlugins/group/flannel/Localhost 0.1
330 TestNetworkPlugins/group/flannel/HairPin 0.11
331 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.37
332 TestNetworkPlugins/group/enable-default-cni/NetCatPod 8.27
333 TestNetworkPlugins/group/bridge/Start 44.38
334 TestNetworkPlugins/group/enable-default-cni/DNS 0.12
335 TestNetworkPlugins/group/enable-default-cni/Localhost 0.1
336 TestNetworkPlugins/group/enable-default-cni/HairPin 0.1
338 TestStartStop/group/old-k8s-version/serial/FirstStart 50.46
340 TestStartStop/group/no-preload/serial/FirstStart 50.95
342 TestStartStop/group/embed-certs/serial/FirstStart 39.44
343 TestNetworkPlugins/group/bridge/KubeletFlags 0.32
344 TestNetworkPlugins/group/bridge/NetCatPod 9.37
345 TestNetworkPlugins/group/bridge/DNS 0.11
346 TestNetworkPlugins/group/bridge/Localhost 0.09
347 TestNetworkPlugins/group/bridge/HairPin 0.09
348 TestStartStop/group/old-k8s-version/serial/DeployApp 8.26
350 TestStartStop/group/no-preload/serial/DeployApp 7.26
351 TestStartStop/group/old-k8s-version/serial/Stop 16.23
352 TestStartStop/group/embed-certs/serial/DeployApp 9.21
354 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 41.31
356 TestStartStop/group/no-preload/serial/Stop 16.38
358 TestStartStop/group/embed-certs/serial/Stop 16.25
359 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.21
360 TestStartStop/group/old-k8s-version/serial/SecondStart 48.1
361 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.32
362 TestStartStop/group/no-preload/serial/SecondStart 47.6
363 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.21
364 TestStartStop/group/embed-certs/serial/SecondStart 43.28
365 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.29
367 TestStartStop/group/default-k8s-diff-port/serial/Stop 16.24
368 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
369 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.07
370 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
371 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
372 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.2
373 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 47.34
374 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.25
376 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.1
377 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.11
378 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.27
380 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.29
383 TestStartStop/group/newest-cni/serial/FirstStart 27.31
384 TestPreload/PreloadSrc/gcs 4.14
385 TestPreload/PreloadSrc/github 4.48
386 TestPreload/PreloadSrc/gcs-cached 0.5
387 TestStartStop/group/newest-cni/serial/DeployApp 0
389 TestStartStop/group/newest-cni/serial/Stop 17.95
390 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
391 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.07
392 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.24
394 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.2
395 TestStartStop/group/newest-cni/serial/SecondStart 10.33
396 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
397 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
398 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.24
TestDownloadOnly/v1.28.0/json-events (8.01s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-928847 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-928847 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (8.012262985s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (8.01s)

                                                
                                    
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I0111 07:23:25.615608    8238 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
I0111 07:23:25.615684    8238 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22402-4689/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-928847
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-928847: exit status 85 (69.844454ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-928847 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-928847 │ jenkins │ v1.37.0 │ 11 Jan 26 07:23 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2026/01/11 07:23:17
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0111 07:23:17.655265    8250 out.go:360] Setting OutFile to fd 1 ...
	I0111 07:23:17.655508    8250 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 07:23:17.655518    8250 out.go:374] Setting ErrFile to fd 2...
	I0111 07:23:17.655523    8250 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 07:23:17.655695    8250 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22402-4689/.minikube/bin
	W0111 07:23:17.655820    8250 root.go:314] Error reading config file at /home/jenkins/minikube-integration/22402-4689/.minikube/config/config.json: open /home/jenkins/minikube-integration/22402-4689/.minikube/config/config.json: no such file or directory
	I0111 07:23:17.656282    8250 out.go:368] Setting JSON to true
	I0111 07:23:17.657123    8250 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":346,"bootTime":1768115852,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0111 07:23:17.657188    8250 start.go:143] virtualization: kvm guest
	I0111 07:23:17.661312    8250 out.go:99] [download-only-928847] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	W0111 07:23:17.661435    8250 preload.go:372] Failed to list preload files: open /home/jenkins/minikube-integration/22402-4689/.minikube/cache/preloaded-tarball: no such file or directory
	I0111 07:23:17.661480    8250 notify.go:221] Checking for updates...
	I0111 07:23:17.662720    8250 out.go:171] MINIKUBE_LOCATION=22402
	I0111 07:23:17.663946    8250 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0111 07:23:17.665086    8250 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22402-4689/kubeconfig
	I0111 07:23:17.666228    8250 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22402-4689/.minikube
	I0111 07:23:17.667341    8250 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W0111 07:23:17.669410    8250 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0111 07:23:17.669636    8250 driver.go:422] Setting default libvirt URI to qemu:///system
	I0111 07:23:17.693406    8250 docker.go:124] docker version: linux-29.1.4:Docker Engine - Community
	I0111 07:23:17.693468    8250 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0111 07:23:17.908781    8250 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:63 SystemTime:2026-01-11 07:23:17.898282512 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0111 07:23:17.908921    8250 docker.go:319] overlay module found
	I0111 07:23:17.910338    8250 out.go:99] Using the docker driver based on user configuration
	I0111 07:23:17.910363    8250 start.go:309] selected driver: docker
	I0111 07:23:17.910369    8250 start.go:928] validating driver "docker" against <nil>
	I0111 07:23:17.910449    8250 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0111 07:23:17.964876    8250 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:63 SystemTime:2026-01-11 07:23:17.95637363 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[ma
p[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0111 07:23:17.965100    8250 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I0111 07:23:17.965731    8250 start_flags.go:417] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I0111 07:23:17.965970    8250 start_flags.go:1001] Wait components to verify : map[apiserver:true system_pods:true]
	I0111 07:23:17.967706    8250 out.go:171] Using Docker driver with root privileges
	I0111 07:23:17.968978    8250 cni.go:84] Creating CNI manager for ""
	I0111 07:23:17.969039    8250 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0111 07:23:17.969048    8250 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I0111 07:23:17.969103    8250 start.go:353] cluster config:
	{Name:download-only-928847 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-928847 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0111 07:23:17.970399    8250 out.go:99] Starting "download-only-928847" primary control-plane node in "download-only-928847" cluster
	I0111 07:23:17.970414    8250 cache.go:134] Beginning downloading kic base image for docker with crio
	I0111 07:23:17.971702    8250 out.go:99] Pulling base image v0.0.48-1768032998-22402 ...
	I0111 07:23:17.971730    8250 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I0111 07:23:17.971774    8250 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 in local docker daemon
	I0111 07:23:17.987425    8250 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 to local cache
	I0111 07:23:17.987614    8250 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 in local cache directory
	I0111 07:23:17.987727    8250 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 to local cache
	I0111 07:23:17.991587    8250 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I0111 07:23:17.991605    8250 cache.go:65] Caching tarball of preloaded images
	I0111 07:23:17.991711    8250 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I0111 07:23:17.993479    8250 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I0111 07:23:17.993508    8250 preload.go:269] Downloading preload from https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I0111 07:23:17.993514    8250 preload.go:336] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I0111 07:23:18.016440    8250 preload.go:313] Got checksum from GCS API "72bc7f8573f574c02d8c9a9b3496176b"
	I0111 07:23:18.016573    8250 download.go:114] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:72bc7f8573f574c02d8c9a9b3496176b -> /home/jenkins/minikube-integration/22402-4689/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I0111 07:23:20.713256    8250 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I0111 07:23:20.713590    8250 profile.go:143] Saving config to /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/download-only-928847/config.json ...
	I0111 07:23:20.713618    8250 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/download-only-928847/config.json: {Name:mkac1f856fca1f2438dd079e36f252088a0ed1dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 07:23:20.713776    8250 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I0111 07:23:20.713959    8250 download.go:114] Downloading: https://dl.k8s.io/release/v1.28.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/22402-4689/.minikube/cache/linux/amd64/v1.28.0/kubectl
	
	
	* The control-plane node download-only-928847 host does not exist
	  To start a cluster, run: "minikube start -p download-only-928847"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.07s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAll (0.22s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.22s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-928847
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestDownloadOnly/v1.35.0/json-events (2.49s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-913516 --force --alsologtostderr --kubernetes-version=v1.35.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-913516 --force --alsologtostderr --kubernetes-version=v1.35.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (2.492765655s)
--- PASS: TestDownloadOnly/v1.35.0/json-events (2.49s)

                                                
                                    
TestDownloadOnly/v1.35.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0/preload-exists
I0111 07:23:28.539009    8238 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
I0111 07:23:28.539046    8238 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22402-4689/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.35.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-913516
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-913516: exit status 85 (69.922695ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-928847 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-928847 │ jenkins │ v1.37.0 │ 11 Jan 26 07:23 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 11 Jan 26 07:23 UTC │ 11 Jan 26 07:23 UTC │
	│ delete  │ -p download-only-928847                                                                                                                                                   │ download-only-928847 │ jenkins │ v1.37.0 │ 11 Jan 26 07:23 UTC │ 11 Jan 26 07:23 UTC │
	│ start   │ -o=json --download-only -p download-only-913516 --force --alsologtostderr --kubernetes-version=v1.35.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-913516 │ jenkins │ v1.37.0 │ 11 Jan 26 07:23 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2026/01/11 07:23:26
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0111 07:23:26.097171    8607 out.go:360] Setting OutFile to fd 1 ...
	I0111 07:23:26.097291    8607 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 07:23:26.097302    8607 out.go:374] Setting ErrFile to fd 2...
	I0111 07:23:26.097307    8607 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 07:23:26.097516    8607 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22402-4689/.minikube/bin
	I0111 07:23:26.098030    8607 out.go:368] Setting JSON to true
	I0111 07:23:26.098851    8607 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":354,"bootTime":1768115852,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0111 07:23:26.098910    8607 start.go:143] virtualization: kvm guest
	I0111 07:23:26.100516    8607 out.go:99] [download-only-913516] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0111 07:23:26.100678    8607 notify.go:221] Checking for updates...
	I0111 07:23:26.101753    8607 out.go:171] MINIKUBE_LOCATION=22402
	I0111 07:23:26.103005    8607 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0111 07:23:26.104129    8607 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22402-4689/kubeconfig
	I0111 07:23:26.105269    8607 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22402-4689/.minikube
	I0111 07:23:26.106346    8607 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W0111 07:23:26.108385    8607 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0111 07:23:26.108644    8607 driver.go:422] Setting default libvirt URI to qemu:///system
	I0111 07:23:26.132245    8607 docker.go:124] docker version: linux-29.1.4:Docker Engine - Community
	I0111 07:23:26.132364    8607 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0111 07:23:26.189701    8607 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:50 SystemTime:2026-01-11 07:23:26.180606841 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0111 07:23:26.189814    8607 docker.go:319] overlay module found
	I0111 07:23:26.191676    8607 out.go:99] Using the docker driver based on user configuration
	I0111 07:23:26.191702    8607 start.go:309] selected driver: docker
	I0111 07:23:26.191707    8607 start.go:928] validating driver "docker" against <nil>
	I0111 07:23:26.191793    8607 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0111 07:23:26.249498    8607 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:50 SystemTime:2026-01-11 07:23:26.239443312 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0111 07:23:26.249683    8607 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I0111 07:23:26.250238    8607 start_flags.go:417] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I0111 07:23:26.250383    8607 start_flags.go:1001] Wait components to verify : map[apiserver:true system_pods:true]
	I0111 07:23:26.251895    8607 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-913516 host does not exist
	  To start a cluster, run: "minikube start -p download-only-913516"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.35.0/LogsDuration (0.07s)

                                                
                                    
TestDownloadOnly/v1.35.0/DeleteAll (0.22s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.35.0/DeleteAll (0.22s)

                                                
                                    
TestDownloadOnly/v1.35.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-913516
--- PASS: TestDownloadOnly/v1.35.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestDownloadOnlyKic (0.39s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-564084 --alsologtostderr --driver=docker  --container-runtime=crio
helpers_test.go:176: Cleaning up "download-docker-564084" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-564084
--- PASS: TestDownloadOnlyKic (0.39s)

                                                
                                    
TestBinaryMirror (0.8s)

                                                
                                                
=== RUN   TestBinaryMirror
I0111 07:23:29.634346    8238 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-682885 --alsologtostderr --binary-mirror http://127.0.0.1:38307 --driver=docker  --container-runtime=crio
helpers_test.go:176: Cleaning up "binary-mirror-682885" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-682885
--- PASS: TestBinaryMirror (0.80s)

                                                
                                    
TestOffline (60.96s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-966080 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-966080 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (58.476726529s)
helpers_test.go:176: Cleaning up "offline-crio-966080" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-966080
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-966080: (2.483440933s)
--- PASS: TestOffline (60.96s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1002: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-171536
addons_test.go:1002: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-171536: exit status 85 (59.680743ms)

                                                
                                                
-- stdout --
	* Profile "addons-171536" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-171536"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1013: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-171536
addons_test.go:1013: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-171536: exit status 85 (64.574597ms)

                                                
                                                
-- stdout --
	* Profile "addons-171536" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-171536"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
TestAddons/Setup (90.47s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-171536 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p addons-171536 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (1m30.472531766s)
--- PASS: TestAddons/Setup (90.47s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.15s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:632: (dbg) Run:  kubectl --context addons-171536 create ns new-namespace
addons_test.go:646: (dbg) Run:  kubectl --context addons-171536 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.15s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (7.4s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:677: (dbg) Run:  kubectl --context addons-171536 create -f testdata/busybox.yaml
addons_test.go:684: (dbg) Run:  kubectl --context addons-171536 create sa gcp-auth-test
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [d1a88d8f-0c64-49a1-b3dc-8959539b0a56] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [d1a88d8f-0c64-49a1-b3dc-8959539b0a56] Running
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 7.003616432s
addons_test.go:696: (dbg) Run:  kubectl --context addons-171536 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:708: (dbg) Run:  kubectl --context addons-171536 describe sa gcp-auth-test
addons_test.go:746: (dbg) Run:  kubectl --context addons-171536 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (7.40s)

                                                
                                    
TestAddons/StoppedEnableDisable (16.68s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-171536
addons_test.go:174: (dbg) Done: out/minikube-linux-amd64 stop -p addons-171536: (16.394029677s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-171536
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-171536
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-171536
--- PASS: TestAddons/StoppedEnableDisable (16.68s)

                                                
                                    
TestCertOptions (23.29s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-523362 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-523362 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (20.175555878s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-523362 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-523362 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-523362 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:176: Cleaning up "cert-options-523362" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-523362
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-523362: (2.42284124s)
--- PASS: TestCertOptions (23.29s)

                                                
                                    
TestCertExpiration (207.19s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-587864 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-587864 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (18.797589619s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-587864 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-587864 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (5.518359345s)
helpers_test.go:176: Cleaning up "cert-expiration-587864" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-587864
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-587864: (2.87284686s)
--- PASS: TestCertExpiration (207.19s)

                                                
                                    
TestForceSystemdFlag (20.08s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-252908 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-252908 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (17.262108715s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-252908 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:176: Cleaning up "force-systemd-flag-252908" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-252908
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-252908: (2.511427194s)
--- PASS: TestForceSystemdFlag (20.08s)

                                                
                                    
TestForceSystemdEnv (22.51s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-413119 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-413119 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (20.07706021s)
helpers_test.go:176: Cleaning up "force-systemd-env-413119" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-413119
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-413119: (2.435850439s)
--- PASS: TestForceSystemdEnv (22.51s)

                                                
                                    
TestErrorSpam/setup (19.2s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-435583 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-435583 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-435583 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-435583 --driver=docker  --container-runtime=crio: (19.200579146s)
--- PASS: TestErrorSpam/setup (19.20s)

                                                
                                    
TestErrorSpam/start (0.64s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-435583 --log_dir /tmp/nospam-435583 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-435583 --log_dir /tmp/nospam-435583 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-435583 --log_dir /tmp/nospam-435583 start --dry-run
--- PASS: TestErrorSpam/start (0.64s)

                                                
                                    
TestErrorSpam/status (0.96s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-435583 --log_dir /tmp/nospam-435583 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-435583 --log_dir /tmp/nospam-435583 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-435583 --log_dir /tmp/nospam-435583 status
--- PASS: TestErrorSpam/status (0.96s)

                                                
                                    
TestErrorSpam/pause (5.23s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-435583 --log_dir /tmp/nospam-435583 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-435583 --log_dir /tmp/nospam-435583 pause: exit status 80 (1.478696714s)

                                                
                                                
-- stdout --
	* Pausing node nospam-435583 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-11T07:26:39Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-435583 --log_dir /tmp/nospam-435583 pause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-435583 --log_dir /tmp/nospam-435583 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-435583 --log_dir /tmp/nospam-435583 pause: exit status 80 (2.110481517s)

                                                
                                                
-- stdout --
	* Pausing node nospam-435583 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-11T07:26:41Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-435583 --log_dir /tmp/nospam-435583 pause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-435583 --log_dir /tmp/nospam-435583 pause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-435583 --log_dir /tmp/nospam-435583 pause: exit status 80 (1.641563171s)

                                                
                                                
-- stdout --
	* Pausing node nospam-435583 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-11T07:26:43Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-amd64 -p nospam-435583 --log_dir /tmp/nospam-435583 pause" failed: exit status 80
--- PASS: TestErrorSpam/pause (5.23s)

                                                
                                    
TestErrorSpam/unpause (5.62s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-435583 --log_dir /tmp/nospam-435583 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-435583 --log_dir /tmp/nospam-435583 unpause: exit status 80 (1.744957646s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-435583 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-11T07:26:45Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-435583 --log_dir /tmp/nospam-435583 unpause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-435583 --log_dir /tmp/nospam-435583 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-435583 --log_dir /tmp/nospam-435583 unpause: exit status 80 (2.104123695s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-435583 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-11T07:26:47Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-435583 --log_dir /tmp/nospam-435583 unpause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-435583 --log_dir /tmp/nospam-435583 unpause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-435583 --log_dir /tmp/nospam-435583 unpause: exit status 80 (1.774748001s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-435583 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-11T07:26:49Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-amd64 -p nospam-435583 --log_dir /tmp/nospam-435583 unpause" failed: exit status 80
--- PASS: TestErrorSpam/unpause (5.62s)

                                                
                                    
TestErrorSpam/stop (12.61s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-435583 --log_dir /tmp/nospam-435583 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-435583 --log_dir /tmp/nospam-435583 stop: (12.410323077s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-435583 --log_dir /tmp/nospam-435583 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-435583 --log_dir /tmp/nospam-435583 stop
--- PASS: TestErrorSpam/stop (12.61s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1865: local sync path: /home/jenkins/minikube-integration/22402-4689/.minikube/files/etc/test/nested/copy/8238/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (35.49s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2244: (dbg) Run:  out/minikube-linux-amd64 start -p functional-148131 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:2244: (dbg) Done: out/minikube-linux-amd64 start -p functional-148131 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (35.492939705s)
--- PASS: TestFunctional/serial/StartWithProxy (35.49s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (6.14s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I0111 07:27:42.216860    8238 config.go:182] Loaded profile config "functional-148131": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-148131 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-148131 --alsologtostderr -v=8: (6.137753961s)
functional_test.go:678: soft start took 6.138423618s for "functional-148131" cluster.
I0111 07:27:48.357151    8238 config.go:182] Loaded profile config "functional-148131": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
--- PASS: TestFunctional/serial/SoftStart (6.14s)

                                                
                                    
TestFunctional/serial/KubeContext (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-148131 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (2.47s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1069: (dbg) Run:  out/minikube-linux-amd64 -p functional-148131 cache add registry.k8s.io/pause:3.1
functional_test.go:1069: (dbg) Run:  out/minikube-linux-amd64 -p functional-148131 cache add registry.k8s.io/pause:3.3
functional_test.go:1069: (dbg) Run:  out/minikube-linux-amd64 -p functional-148131 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.47s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (0.89s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1097: (dbg) Run:  docker build -t minikube-local-cache-test:functional-148131 /tmp/TestFunctionalserialCacheCmdcacheadd_local803811883/001
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-148131 cache add minikube-local-cache-test:functional-148131
functional_test.go:1114: (dbg) Run:  out/minikube-linux-amd64 -p functional-148131 cache delete minikube-local-cache-test:functional-148131
functional_test.go:1103: (dbg) Run:  docker rmi minikube-local-cache-test:functional-148131
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (0.89s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1122: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1130: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.3s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1144: (dbg) Run:  out/minikube-linux-amd64 -p functional-148131 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.30s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.5s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1167: (dbg) Run:  out/minikube-linux-amd64 -p functional-148131 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-148131 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1173: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-148131 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (280.770786ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-148131 cache reload
functional_test.go:1183: (dbg) Run:  out/minikube-linux-amd64 -p functional-148131 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.50s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1192: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1192: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-148131 kubectl -- --context functional-148131 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-148131 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                    
TestFunctional/serial/ExtraConfig (52.89s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-148131 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-148131 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (52.89349498s)
functional_test.go:776: restart took 52.8936144s for "functional-148131" cluster.
I0111 07:28:47.000782    8238 config.go:182] Loaded profile config "functional-148131": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
--- PASS: TestFunctional/serial/ExtraConfig (52.89s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-148131 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.19s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1256: (dbg) Run:  out/minikube-linux-amd64 -p functional-148131 logs
functional_test.go:1256: (dbg) Done: out/minikube-linux-amd64 -p functional-148131 logs: (1.186086223s)
--- PASS: TestFunctional/serial/LogsCmd (1.19s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.2s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 -p functional-148131 logs --file /tmp/TestFunctionalserialLogsFileCmd4123992918/001/logs.txt
functional_test.go:1270: (dbg) Done: out/minikube-linux-amd64 -p functional-148131 logs --file /tmp/TestFunctionalserialLogsFileCmd4123992918/001/logs.txt: (1.203665347s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.20s)

                                                
                                    
TestFunctional/serial/InvalidService (3.6s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2331: (dbg) Run:  kubectl --context functional-148131 apply -f testdata/invalidsvc.yaml
functional_test.go:2345: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-148131
functional_test.go:2345: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-148131: exit status 115 (349.032634ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:32384 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2337: (dbg) Run:  kubectl --context functional-148131 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.60s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1219: (dbg) Run:  out/minikube-linux-amd64 -p functional-148131 config unset cpus
functional_test.go:1219: (dbg) Run:  out/minikube-linux-amd64 -p functional-148131 config get cpus
functional_test.go:1219: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-148131 config get cpus: exit status 14 (84.351272ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1219: (dbg) Run:  out/minikube-linux-amd64 -p functional-148131 config set cpus 2
functional_test.go:1219: (dbg) Run:  out/minikube-linux-amd64 -p functional-148131 config get cpus
functional_test.go:1219: (dbg) Run:  out/minikube-linux-amd64 -p functional-148131 config unset cpus
functional_test.go:1219: (dbg) Run:  out/minikube-linux-amd64 -p functional-148131 config get cpus
functional_test.go:1219: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-148131 config get cpus: exit status 14 (70.462834ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.43s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (7.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 0 -p functional-148131 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 0 -p functional-148131 --alsologtostderr -v=1] ...
helpers_test.go:526: unable to kill pid 45105: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (7.55s)

                                                
                                    
TestFunctional/parallel/DryRun (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:994: (dbg) Run:  out/minikube-linux-amd64 start -p functional-148131 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:994: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-148131 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (218.400277ms)

                                                
                                                
-- stdout --
	* [functional-148131] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22402
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22402-4689/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22402-4689/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0111 07:29:32.199609   46007 out.go:360] Setting OutFile to fd 1 ...
	I0111 07:29:32.199898   46007 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 07:29:32.199911   46007 out.go:374] Setting ErrFile to fd 2...
	I0111 07:29:32.199918   46007 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 07:29:32.200166   46007 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22402-4689/.minikube/bin
	I0111 07:29:32.200715   46007 out.go:368] Setting JSON to false
	I0111 07:29:32.202069   46007 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":720,"bootTime":1768115852,"procs":254,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0111 07:29:32.202127   46007 start.go:143] virtualization: kvm guest
	I0111 07:29:32.204059   46007 out.go:179] * [functional-148131] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0111 07:29:32.206272   46007 out.go:179]   - MINIKUBE_LOCATION=22402
	I0111 07:29:32.206286   46007 notify.go:221] Checking for updates...
	I0111 07:29:32.210171   46007 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0111 07:29:32.212621   46007 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22402-4689/kubeconfig
	I0111 07:29:32.214063   46007 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22402-4689/.minikube
	I0111 07:29:32.215424   46007 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0111 07:29:32.217416   46007 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0111 07:29:32.219733   46007 config.go:182] Loaded profile config "functional-148131": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 07:29:32.220418   46007 driver.go:422] Setting default libvirt URI to qemu:///system
	I0111 07:29:32.252552   46007 docker.go:124] docker version: linux-29.1.4:Docker Engine - Community
	I0111 07:29:32.252650   46007 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0111 07:29:32.329571   46007 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:false NGoroutines:56 SystemTime:2026-01-11 07:29:32.318407148 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0111 07:29:32.329688   46007 docker.go:319] overlay module found
	I0111 07:29:32.331584   46007 out.go:179] * Using the docker driver based on existing profile
	I0111 07:29:32.333273   46007 start.go:309] selected driver: docker
	I0111 07:29:32.333292   46007 start.go:928] validating driver "docker" against &{Name:functional-148131 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:functional-148131 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mount
Options:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0111 07:29:32.333407   46007 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0111 07:29:32.337064   46007 out.go:203] 
	W0111 07:29:32.340439   46007 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0111 07:29:32.341773   46007 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 start -p functional-148131 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.49s)

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1040: (dbg) Run:  out/minikube-linux-amd64 start -p functional-148131 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1040: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-148131 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (170.597511ms)

                                                
                                                
-- stdout --
	* [functional-148131] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22402
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22402-4689/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22402-4689/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0111 07:29:28.891772   44332 out.go:360] Setting OutFile to fd 1 ...
	I0111 07:29:28.891888   44332 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 07:29:28.891894   44332 out.go:374] Setting ErrFile to fd 2...
	I0111 07:29:28.891897   44332 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 07:29:28.892234   44332 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22402-4689/.minikube/bin
	I0111 07:29:28.892634   44332 out.go:368] Setting JSON to false
	I0111 07:29:28.893591   44332 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":717,"bootTime":1768115852,"procs":245,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0111 07:29:28.893647   44332 start.go:143] virtualization: kvm guest
	I0111 07:29:28.895988   44332 out.go:179] * [functional-148131] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I0111 07:29:28.897301   44332 notify.go:221] Checking for updates...
	I0111 07:29:28.897350   44332 out.go:179]   - MINIKUBE_LOCATION=22402
	I0111 07:29:28.898849   44332 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0111 07:29:28.901244   44332 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22402-4689/kubeconfig
	I0111 07:29:28.902619   44332 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22402-4689/.minikube
	I0111 07:29:28.903815   44332 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0111 07:29:28.905089   44332 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0111 07:29:28.906845   44332 config.go:182] Loaded profile config "functional-148131": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 07:29:28.907562   44332 driver.go:422] Setting default libvirt URI to qemu:///system
	I0111 07:29:28.934005   44332 docker.go:124] docker version: linux-29.1.4:Docker Engine - Community
	I0111 07:29:28.934117   44332 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0111 07:29:28.991538   44332 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:false NGoroutines:56 SystemTime:2026-01-11 07:29:28.981096551 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0111 07:29:28.991626   44332 docker.go:319] overlay module found
	I0111 07:29:28.993481   44332 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I0111 07:29:28.994695   44332 start.go:309] selected driver: docker
	I0111 07:29:28.994709   44332 start.go:928] validating driver "docker" against &{Name:functional-148131 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:functional-148131 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0111 07:29:28.994781   44332 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0111 07:29:28.996528   44332 out.go:203] 
	W0111 07:29:28.998733   44332 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0111 07:29:29.000029   44332 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.17s)

                                                
                                    
TestFunctional/parallel/StatusCmd (1.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-148131 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-148131 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-148131 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.01s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (22.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1641: (dbg) Run:  kubectl --context functional-148131 create deployment hello-node-connect --image ghcr.io/medyagh/image-mirrors/kicbase/echo-server
functional_test.go:1645: (dbg) Run:  kubectl --context functional-148131 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1650: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:353: "hello-node-connect-5d95464fd4-q5grb" [d4b861a8-9986-48ce-869e-a74538d40282] Pending
helpers_test.go:353: "hello-node-connect-5d95464fd4-q5grb" [d4b861a8-9986-48ce-869e-a74538d40282] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-connect-5d95464fd4-q5grb" [d4b861a8-9986-48ce-869e-a74538d40282] Running
functional_test.go:1650: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 22.003330818s
functional_test.go:1659: (dbg) Run:  out/minikube-linux-amd64 -p functional-148131 service hello-node-connect --url
functional_test.go:1665: found endpoint for hello-node-connect: http://192.168.49.2:30333
functional_test.go:1685: http://192.168.49.2:30333: success! body:
Request served by hello-node-connect-5d95464fd4-q5grb

                                                
                                                
HTTP/1.1 GET /

                                                
                                                
Host: 192.168.49.2:30333
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (22.68s)

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1700: (dbg) Run:  out/minikube-linux-amd64 -p functional-148131 addons list
functional_test.go:1712: (dbg) Run:  out/minikube-linux-amd64 -p functional-148131 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (35.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:353: "storage-provisioner" [2dd30511-e9d9-4971-9a7c-afde62bbba9e] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.003862701s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-148131 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-148131 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-148131 get pvc myclaim -o=json
I0111 07:29:00.164168    8238 retry.go:84] will retry after 2.4s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:8abf909c-0beb-40a0-962c-b116e2d8de67 ResourceVersion:644 Generation:0 CreationTimestamp:2026-01-11 07:29:00 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0xc001b5eb70 VolumeMode:0xc001b5eb80 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-148131 get pvc myclaim -o=json
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-148131 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-148131 apply -f testdata/storage-provisioner/pod.yaml
I0111 07:29:06.165569    8238 detect.go:211] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [7e7abfe7-d73f-4d7a-ab2c-a7057724f107] Pending
helpers_test.go:353: "sp-pod" [7e7abfe7-d73f-4d7a-ab2c-a7057724f107] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:353: "sp-pod" [7e7abfe7-d73f-4d7a-ab2c-a7057724f107] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 17.006488719s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-148131 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-148131 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-148131 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [2d5fad1e-f2bd-4982-a6f9-4071f256c9e7] Pending
helpers_test.go:353: "sp-pod" [2d5fad1e-f2bd-4982-a6f9-4071f256c9e7] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.003726519s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-148131 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (35.33s)
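
Note on the retry logged at 07:29:00 above: the claim sits in "Pending" for a couple of seconds until the storage-provisioner binds it, and the test simply polls until the phase flips to "Bound". A minimal Go sketch of the same wait loop, for illustration only (it assumes kubectl is on PATH and that the functional-148131 context exists; it is not the test's own helper):

	// pvcwait.go - poll a PVC until it reports phase "Bound", mirroring the
	// retry shown in the log for "myclaim".
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	func waitForBound(context, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			out, err := exec.Command("kubectl", "--context", context,
				"get", "pvc", name, "-o", "jsonpath={.status.phase}").Output()
			phase := strings.TrimSpace(string(out))
			if err == nil && phase == "Bound" {
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("pvc %s still %q after %s", name, phase, timeout)
			}
			time.Sleep(2 * time.Second) // the log retried after ~2.4s
		}
	}

	func main() {
		if err := waitForBound("functional-148131", "myclaim", 4*time.Minute); err != nil {
			fmt.Println(err)
		}
	}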

                                                
                                    
TestFunctional/parallel/SSHCmd (0.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1735: (dbg) Run:  out/minikube-linux-amd64 -p functional-148131 ssh "echo hello"
functional_test.go:1752: (dbg) Run:  out/minikube-linux-amd64 -p functional-148131 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.58s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.81s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-148131 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-148131 ssh -n functional-148131 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-148131 cp functional-148131:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd4288768774/001/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-148131 ssh -n functional-148131 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-148131 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-148131 ssh -n functional-148131 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.81s)

                                                
                                    
TestFunctional/parallel/MySQL (39s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1803: (dbg) Run:  kubectl --context functional-148131 replace --force -f testdata/mysql.yaml
functional_test.go:1809: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:353: "mysql-7d7b65bc95-sf6c4" [31967396-03ad-4da2-b159-22f479f66321] Pending
helpers_test.go:353: "mysql-7d7b65bc95-sf6c4" [31967396-03ad-4da2-b159-22f479f66321] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:353: "mysql-7d7b65bc95-sf6c4" [31967396-03ad-4da2-b159-22f479f66321] Running
functional_test.go:1809: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 30.003423248s
functional_test.go:1817: (dbg) Run:  kubectl --context functional-148131 exec mysql-7d7b65bc95-sf6c4 -- mysql -ppassword -e "show databases;"
functional_test.go:1817: (dbg) Non-zero exit: kubectl --context functional-148131 exec mysql-7d7b65bc95-sf6c4 -- mysql -ppassword -e "show databases;": exit status 1 (129.137526ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I0111 07:29:23.407780    8238 retry.go:84] will retry after 600ms: exit status 1
functional_test.go:1817: (dbg) Run:  kubectl --context functional-148131 exec mysql-7d7b65bc95-sf6c4 -- mysql -ppassword -e "show databases;"
functional_test.go:1817: (dbg) Non-zero exit: kubectl --context functional-148131 exec mysql-7d7b65bc95-sf6c4 -- mysql -ppassword -e "show databases;": exit status 1 (129.262223ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I0111 07:29:24.175069    8238 detect.go:211] nested VM detected
functional_test.go:1817: (dbg) Run:  kubectl --context functional-148131 exec mysql-7d7b65bc95-sf6c4 -- mysql -ppassword -e "show databases;"
functional_test.go:1817: (dbg) Non-zero exit: kubectl --context functional-148131 exec mysql-7d7b65bc95-sf6c4 -- mysql -ppassword -e "show databases;": exit status 1 (98.235184ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1817: (dbg) Run:  kubectl --context functional-148131 exec mysql-7d7b65bc95-sf6c4 -- mysql -ppassword -e "show databases;"
functional_test.go:1817: (dbg) Non-zero exit: kubectl --context functional-148131 exec mysql-7d7b65bc95-sf6c4 -- mysql -ppassword -e "show databases;": exit status 1 (115.814468ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1817: (dbg) Run:  kubectl --context functional-148131 exec mysql-7d7b65bc95-sf6c4 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (39.00s)
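
Note: the ERROR 1045 and ERROR 2002 responses above are transient - MySQL is still initializing inside the pod - and the test just retries the probe query until it succeeds. A rough Go sketch of that retry, for illustration only (the pod name and the "-ppassword" credential are copied from this run and are not stable values):

	// mysqlprobe.go - retry "show databases;" inside the mysql pod until the
	// server is ready to accept the query.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		pod := "mysql-7d7b65bc95-sf6c4" // from this run; yours will differ
		for attempt := 1; attempt <= 10; attempt++ {
			cmd := exec.Command("kubectl", "--context", "functional-148131",
				"exec", pod, "--", "mysql", "-ppassword", "-e", "show databases;")
			out, err := cmd.CombinedOutput()
			if err == nil {
				fmt.Print(string(out))
				return
			}
			fmt.Printf("attempt %d: %v\n", attempt, err)
			time.Sleep(time.Duration(attempt) * time.Second) // simple linear backoff
		}
	}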

                                                
                                    
TestFunctional/parallel/FileSync (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1939: Checking for existence of /etc/test/nested/copy/8238/hosts within VM
functional_test.go:1941: (dbg) Run:  out/minikube-linux-amd64 -p functional-148131 ssh "sudo cat /etc/test/nested/copy/8238/hosts"
functional_test.go:1946: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.33s)

                                                
                                    
TestFunctional/parallel/CertSync (1.88s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1982: Checking for existence of /etc/ssl/certs/8238.pem within VM
functional_test.go:1983: (dbg) Run:  out/minikube-linux-amd64 -p functional-148131 ssh "sudo cat /etc/ssl/certs/8238.pem"
functional_test.go:1982: Checking for existence of /usr/share/ca-certificates/8238.pem within VM
functional_test.go:1983: (dbg) Run:  out/minikube-linux-amd64 -p functional-148131 ssh "sudo cat /usr/share/ca-certificates/8238.pem"
functional_test.go:1982: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1983: (dbg) Run:  out/minikube-linux-amd64 -p functional-148131 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2009: Checking for existence of /etc/ssl/certs/82382.pem within VM
functional_test.go:2010: (dbg) Run:  out/minikube-linux-amd64 -p functional-148131 ssh "sudo cat /etc/ssl/certs/82382.pem"
functional_test.go:2009: Checking for existence of /usr/share/ca-certificates/82382.pem within VM
functional_test.go:2010: (dbg) Run:  out/minikube-linux-amd64 -p functional-148131 ssh "sudo cat /usr/share/ca-certificates/82382.pem"
functional_test.go:2009: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2010: (dbg) Run:  out/minikube-linux-amd64 -p functional-148131 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.88s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-148131 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2037: (dbg) Run:  out/minikube-linux-amd64 -p functional-148131 ssh "sudo systemctl is-active docker"
functional_test.go:2037: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-148131 ssh "sudo systemctl is-active docker": exit status 1 (331.42308ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2037: (dbg) Run:  out/minikube-linux-amd64 -p functional-148131 ssh "sudo systemctl is-active containerd"
functional_test.go:2037: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-148131 ssh "sudo systemctl is-active containerd": exit status 1 (337.70112ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.67s)
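
Note: exit status 3 together with the stdout "inactive" is the expected result here - systemctl is-active signals a non-active unit through its exit code, the inner ssh propagates it, and the minikube wrapper then exits 1. A small Go sketch of how that outcome can be interpreted (profile name taken from this run; it assumes a minikube binary on PATH rather than the repo-local out/minikube-linux-amd64):

	// runtimecheck.go - confirm dockerd is not running on a crio cluster.
	package main

	import (
		"errors"
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("minikube", "-p", "functional-148131",
			"ssh", "sudo systemctl is-active docker").Output()
		state := strings.TrimSpace(string(out))

		var exitErr *exec.ExitError
		switch {
		case err == nil:
			fmt.Println("docker is", state, "- not expected on a crio runtime")
		case errors.As(err, &exitErr) && state == "inactive":
			fmt.Println("docker is inactive, as the test expects") // non-zero exit is normal here
		default:
			fmt.Println("check failed:", state, err)
		}
	}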

                                                
                                    
TestFunctional/parallel/License (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2298: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.41s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-148131 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-148131 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-148131 tunnel --alsologtostderr] ...
helpers_test.go:526: unable to kill pid 39151: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-148131 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.49s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-148131 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (22.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-148131 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:353: "nginx-svc" [a4bf905c-641d-4dff-ae74-278e5ab12ac5] Pending
helpers_test.go:353: "nginx-svc" [a4bf905c-641d-4dff-ae74-278e5ab12ac5] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx-svc" [a4bf905c-641d-4dff-ae74-278e5ab12ac5] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 22.004307307s
I0111 07:29:16.803725    8238 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (22.24s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-148131 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.103.195.243 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
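
Note: once the tunnel is up, the LoadBalancer IP that kubectl reports (10.103.195.243 in this run) answers plain HTTP from the host. A minimal Go check of that reachability, for illustration only (the IP is hard-coded from this run; in practice read it from the service's status.loadBalancer.ingress field):

	// tunnelcheck.go - verify the tunneled LoadBalancer IP responds over HTTP.
	package main

	import (
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{Timeout: 5 * time.Second}
		resp, err := client.Get("http://10.103.195.243")
		if err != nil {
			fmt.Println("tunnel not reachable:", err)
			return
		}
		defer resp.Body.Close()
		fmt.Println("tunnel responded with", resp.Status)
	}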

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-148131 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (7.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1456: (dbg) Run:  kubectl --context functional-148131 create deployment hello-node --image ghcr.io/medyagh/image-mirrors/kicbase/echo-server
functional_test.go:1460: (dbg) Run:  kubectl --context functional-148131 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1465: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:353: "hello-node-684ffdf98c-2snfj" [3746fed7-58a7-42e2-b5ff-9a5acf035e81] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-684ffdf98c-2snfj" [3746fed7-58a7-42e2-b5ff-9a5acf035e81] Running
functional_test.go:1465: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.003354583s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.24s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1295: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.44s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1330: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1335: Took "354.164684ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1344: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1349: Took "65.855181ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.42s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1381: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1386: Took "343.172423ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1394: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1399: Took "58.893607ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.40s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (7.82s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:74: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-148131 /tmp/TestFunctionalparallelMountCmdany-port659180942/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:108: wrote "test-1768116559280684752" to /tmp/TestFunctionalparallelMountCmdany-port659180942/001/created-by-test
functional_test_mount_test.go:108: wrote "test-1768116559280684752" to /tmp/TestFunctionalparallelMountCmdany-port659180942/001/created-by-test-removed-by-pod
functional_test_mount_test.go:108: wrote "test-1768116559280684752" to /tmp/TestFunctionalparallelMountCmdany-port659180942/001/test-1768116559280684752
functional_test_mount_test.go:116: (dbg) Run:  out/minikube-linux-amd64 -p functional-148131 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:116: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-148131 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (282.954067ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0111 07:29:19.563925    8238 retry.go:84] will retry after 500ms: exit status 1
functional_test_mount_test.go:116: (dbg) Run:  out/minikube-linux-amd64 -p functional-148131 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:130: (dbg) Run:  out/minikube-linux-amd64 -p functional-148131 ssh -- ls -la /mount-9p
functional_test_mount_test.go:134: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jan 11 07:29 created-by-test
-rw-r--r-- 1 docker docker 24 Jan 11 07:29 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jan 11 07:29 test-1768116559280684752
functional_test_mount_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p functional-148131 ssh cat /mount-9p/test-1768116559280684752
functional_test_mount_test.go:149: (dbg) Run:  kubectl --context functional-148131 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:154: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:353: "busybox-mount" [2495dcbf-78ce-4422-b120-53b1e33cd64d] Pending
helpers_test.go:353: "busybox-mount" [2495dcbf-78ce-4422-b120-53b1e33cd64d] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:353: "busybox-mount" [2495dcbf-78ce-4422-b120-53b1e33cd64d] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "busybox-mount" [2495dcbf-78ce-4422-b120-53b1e33cd64d] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:154: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.003814453s
functional_test_mount_test.go:170: (dbg) Run:  kubectl --context functional-148131 logs busybox-mount
functional_test_mount_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p functional-148131 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p functional-148131 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:91: (dbg) Run:  out/minikube-linux-amd64 -p functional-148131 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:95: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-148131 /tmp/TestFunctionalparallelMountCmdany-port659180942/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.82s)
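
Note: the flow above is start the mount daemon, retry findmnt until the 9p mount appears inside the guest, then exercise the mount from a pod. A standalone Go sketch of that loop, for illustration only (the host path /tmp/mount-src is a placeholder, and it assumes a minikube binary on PATH rather than the repo-local out/minikube-linux-amd64):

	// mountcheck.go - background a "minikube mount" and wait for the 9p mount
	// to become visible inside the guest.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		mount := exec.Command("minikube", "mount", "-p", "functional-148131",
			"/tmp/mount-src:/mount-9p")
		if err := mount.Start(); err != nil {
			panic(err)
		}
		defer mount.Process.Kill() // same cleanup the test performs when stopping the daemon

		for i := 0; i < 20; i++ {
			out, err := exec.Command("minikube", "-p", "functional-148131",
				"ssh", "findmnt -T /mount-9p").CombinedOutput()
			if err == nil {
				fmt.Print(string(out))
				return
			}
			time.Sleep(500 * time.Millisecond) // the log retried after 500ms
		}
		fmt.Println("mount never appeared")
	}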

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.93s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1474: (dbg) Run:  out/minikube-linux-amd64 -p functional-148131 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.93s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.92s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1504: (dbg) Run:  out/minikube-linux-amd64 -p functional-148131 service list -o json
functional_test.go:1509: Took "920.543379ms" to run "out/minikube-linux-amd64 -p functional-148131 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.92s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1524: (dbg) Run:  out/minikube-linux-amd64 -p functional-148131 service --namespace=default --https --url hello-node
functional_test.go:1537: found endpoint: https://192.168.49.2:30103
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.56s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1555: (dbg) Run:  out/minikube-linux-amd64 -p functional-148131 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.60s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (2.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:219: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-148131 /tmp/TestFunctionalparallelMountCmdspecific-port3241870188/001:/mount-9p --alsologtostderr -v=1 --port 35867]
functional_test_mount_test.go:249: (dbg) Run:  out/minikube-linux-amd64 -p functional-148131 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:249: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-148131 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (305.598139ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:249: (dbg) Run:  out/minikube-linux-amd64 -p functional-148131 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:263: (dbg) Run:  out/minikube-linux-amd64 -p functional-148131 ssh -- ls -la /mount-9p
functional_test_mount_test.go:267: guest mount directory contents
total 0
functional_test_mount_test.go:269: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-148131 /tmp/TestFunctionalparallelMountCmdspecific-port3241870188/001:/mount-9p --alsologtostderr -v=1 --port 35867] ...
functional_test_mount_test.go:270: reading mount text
functional_test_mount_test.go:284: done reading mount text
functional_test_mount_test.go:236: (dbg) Run:  out/minikube-linux-amd64 -p functional-148131 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:236: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-148131 ssh "sudo umount -f /mount-9p": exit status 1 (290.667153ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:238: "out/minikube-linux-amd64 -p functional-148131 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:240: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-148131 /tmp/TestFunctionalparallelMountCmdspecific-port3241870188/001:/mount-9p --alsologtostderr -v=1 --port 35867] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.12s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1574: (dbg) Run:  out/minikube-linux-amd64 -p functional-148131 service hello-node --url
functional_test.go:1580: found endpoint for hello-node: http://192.168.49.2:30103
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.57s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.84s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:304: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-148131 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4058201714/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:304: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-148131 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4058201714/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:304: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-148131 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4058201714/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-amd64 -p functional-148131 ssh "findmnt -T" /mount1
functional_test_mount_test.go:331: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-148131 ssh "findmnt -T" /mount1: exit status 1 (343.256437ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-amd64 -p functional-148131 ssh "findmnt -T" /mount1
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-amd64 -p functional-148131 ssh "findmnt -T" /mount2
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-amd64 -p functional-148131 ssh "findmnt -T" /mount3
functional_test_mount_test.go:376: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-148131 --kill=true
functional_test_mount_test.go:319: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-148131 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4058201714/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:319: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-148131 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4058201714/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:319: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-148131 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4058201714/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.84s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-148131 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-148131 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.35.0
registry.k8s.io/kube-proxy:v1.35.0
registry.k8s.io/kube-controller-manager:v1.35.0
registry.k8s.io/kube-apiserver:v1.35.0
registry.k8s.io/etcd:3.6.6-0
registry.k8s.io/coredns/coredns:v1.13.1
public.ecr.aws/nginx/nginx:alpine
public.ecr.aws/docker/library/mysql:8.4
localhost/minikube-local-cache-test:functional-148131
ghcr.io/medyagh/image-mirrors/kicbase/echo-server:latest
ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-148131
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-148131 image ls --format short --alsologtostderr:
I0111 07:29:36.306748   47309 out.go:360] Setting OutFile to fd 1 ...
I0111 07:29:36.306850   47309 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0111 07:29:36.306856   47309 out.go:374] Setting ErrFile to fd 2...
I0111 07:29:36.306864   47309 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0111 07:29:36.307166   47309 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22402-4689/.minikube/bin
I0111 07:29:36.307831   47309 config.go:182] Loaded profile config "functional-148131": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
I0111 07:29:36.307939   47309 config.go:182] Loaded profile config "functional-148131": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
I0111 07:29:36.308390   47309 cli_runner.go:164] Run: docker container inspect functional-148131 --format={{.State.Status}}
I0111 07:29:36.331753   47309 ssh_runner.go:195] Run: systemctl --version
I0111 07:29:36.331818   47309 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-148131
I0111 07:29:36.354198   47309 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/22402-4689/.minikube/machines/functional-148131/id_rsa Username:docker}
I0111 07:29:36.456607   47309 ssh_runner.go:195] Run: sudo crictl --timeout=10s images --output json
E0111 07:29:36.490214   47309 logFile.go:53] failed to close the audit log: invalid argument
W0111 07:29:36.490246   47309 root.go:91] failed to log command end to audit: failed to convert logs to rows: failed to unmarshal "{\"specversion\":\"1.0\",\"id\":\"e4eb2ed4-2fd5-4681-a94c-04468c49369e\",\"source\":\"https://minikube.sigs.k8s.io/\",\"type\":\"io.k8s.sigs.minikube.audit\",\"datacontenttype\":\"application/json\",\"data\":{\"args\":\"functional-148131 ssh sudo crictl images\",\"command\":\"ssh\",\"endTime\":\"11 Jan 26 07:27 UTC\"": unexpected end of JSON input
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.25s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-148131 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-148131 image ls --format table --alsologtostderr:
┌───────────────────────────────────────────────────┬───────────────────────────────────────┬───────────────┬────────┐
│                       IMAGE                       │                  TAG                  │   IMAGE ID    │  SIZE  │
├───────────────────────────────────────────────────┼───────────────────────────────────────┼───────────────┼────────┤
│ registry.k8s.io/kube-apiserver                    │ v1.35.0                               │ 5c6acd67e9cd1 │ 90.8MB │
│ registry.k8s.io/kube-controller-manager           │ v1.35.0                               │ 2c9a4b058bd7e │ 76.9MB │
│ docker.io/kindest/kindnetd                        │ v20250512-df8de77b                    │ 409467f978b4a │ 109MB  │
│ gcr.io/k8s-minikube/busybox                       │ 1.28.4-glibc                          │ 56cc512116c8f │ 4.63MB │
│ localhost/minikube-local-cache-test               │ functional-148131                     │ d3a266a042abc │ 3.33kB │
│ public.ecr.aws/docker/library/mysql               │ 8.4                                   │ 54c6e074ef93c │ 804MB  │
│ public.ecr.aws/nginx/nginx                        │ alpine                                │ b9d44994d8add │ 63.3MB │
│ registry.k8s.io/pause                             │ latest                                │ 350b164e7ae1d │ 247kB  │
│ gcr.io/k8s-minikube/storage-provisioner           │ v5                                    │ 6e38f40d628db │ 31.5MB │
│ ghcr.io/medyagh/image-mirrors/kicbase/echo-server │ functional-148131                     │ 9056ab77afb8e │ 4.94MB │
│ ghcr.io/medyagh/image-mirrors/kicbase/echo-server │ latest                                │ 9056ab77afb8e │ 4.94MB │
│ registry.k8s.io/coredns/coredns                   │ v1.13.1                               │ aa5e3ebc0dfed │ 79.2MB │
│ registry.k8s.io/pause                             │ 3.10.1                                │ cd073f4c5f6a8 │ 742kB  │
│ docker.io/kindest/kindnetd                        │ v20251212-v0.29.0-alpha-105-g20ccfc88 │ 4921d7a6dffa9 │ 108MB  │
│ registry.k8s.io/etcd                              │ 3.6.6-0                               │ 0a108f7189562 │ 63.6MB │
│ registry.k8s.io/kube-proxy                        │ v1.35.0                               │ 32652ff1bbe6b │ 72MB   │
│ registry.k8s.io/kube-scheduler                    │ v1.35.0                               │ 550794e3b12ac │ 52.8MB │
│ registry.k8s.io/pause                             │ 3.1                                   │ da86e6ba6ca19 │ 747kB  │
│ registry.k8s.io/pause                             │ 3.3                                   │ 0184c1613d929 │ 686kB  │
└───────────────────────────────────────────────────┴───────────────────────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-148131 image ls --format table --alsologtostderr:
I0111 07:29:36.554273   47461 out.go:360] Setting OutFile to fd 1 ...
I0111 07:29:36.554514   47461 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0111 07:29:36.554530   47461 out.go:374] Setting ErrFile to fd 2...
I0111 07:29:36.554534   47461 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0111 07:29:36.554796   47461 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22402-4689/.minikube/bin
I0111 07:29:36.555413   47461 config.go:182] Loaded profile config "functional-148131": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
I0111 07:29:36.555518   47461 config.go:182] Loaded profile config "functional-148131": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
I0111 07:29:36.555983   47461 cli_runner.go:164] Run: docker container inspect functional-148131 --format={{.State.Status}}
I0111 07:29:36.574627   47461 ssh_runner.go:195] Run: systemctl --version
I0111 07:29:36.574665   47461 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-148131
I0111 07:29:36.594711   47461 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/22402-4689/.minikube/machines/functional-148131/id_rsa Username:docker}
I0111 07:29:36.697613   47461 ssh_runner.go:195] Run: sudo crictl --timeout=10s images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-148131 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-148131 image ls --format json --alsologtostderr:
[{"id":"5c6acd67e9cd10eb60a246cd233db251ef62ea97e6572f897e873f0cb648f499","repoDigests":["registry.k8s.io/kube-apiserver@sha256:32f98b308862e1cf98c900927d84630fb86a836a480f02752a779eb85c1489f3","registry.k8s.io/kube-apiserver@sha256:50e01ce089b6b6508e2f68ba0da943a3bc4134596e7e2afaac27dd26f71aca7a"],"repoTags":["registry.k8s.io/kube-apiserver:v1.35.0"],"size":"90844140"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029","docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"249229937"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e
98765c"],"repoTags":[],"size":"43824855"},{"id":"54c6e074ef93c709bfd8e76a38f54a65e9b5a38d25c9cf82e2633a21f89cd009","repoDigests":["public.ecr.aws/docker/library/mysql@sha256:615302383ec847282233669b4c18396aa075b1279ff7729af0dcd99784361659","public.ecr.aws/docker/library/mysql@sha256:90544b3775490579867a30988d48f0215fc3b88d78d8d62b2c0d96ee9226a2b7"],"repoTags":["public.ecr.aws/docker/library/mysql:8.4"],"size":"803768460"},{"id":"2c9a4b058bd7e6ec479c38f9e8a1dac2f5ee5b0a3ebda6dfac968e8720229508","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:3e343fd915d2e214b9a68c045b94017832927edb89aafa471324f8d05a191111","registry.k8s.io/kube-controller-manager@sha256:e0ce4c7d278a001734bbd8020ed1b7e535ae9d2412c700032eb3df190ea91a62"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.35.0"],"size":"76893520"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["r
egistry.k8s.io/pause:latest"],"size":"247077"},{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998","gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisi
oner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"550794e3b12ac21ec7fd940bdfb45f7c8dae3c52acd8c70e580b882de20c3dcc","repoDigests":["registry.k8s.io/kube-scheduler@sha256:0ab622491a82532e01876d55e365c08c5bac01bcd5444a8ed58c1127ab47819f","registry.k8s.io/kube-scheduler@sha256:dd2b6a420b171e83748166a66372f43384b3142fc4f6f56a6240a9e152cccd69"],"repoTags":["registry.k8s.io/kube-scheduler:v1.35.0"],"size":"52763986"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"492
1d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251","repoDigests":["docker.io/kindest/kindnetd@sha256:7c22558dc06a570d46ea6e8a73b23cdc754eb81f7c08d3441a3171ad359ffc27","docker.io/kindest/kindnetd@sha256:377e2e7a513148f7c942b51cd57bdce1589940df856105384ac7f753a1ab43ae"],"repoTags":["docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88"],"size":"107598204"},{"id":"b9d44994d8adde234cc849b6518ae39e786c40b1a7c9cc1de674fb3e7f913fc2","repoDigests":["public.ecr.aws/nginx/nginx@sha256:92e3aff70715f47c5c05580bbe7ed66cb0625814e71b8885ccdbb6d89496f87f","public.ecr.aws/nginx/nginx@sha256:a6fbdb4b73007c40f67bfc798a2045503b634f9c53e8309396e5aaf38c418ac0"],"repoTags":["public.ecr.aws/nginx/nginx:alpine"],"size":"63312028"},{"id":"32652ff1bbe6b6df16a3dc621fcad4e6e185a2852bffc5847aab79e36c54bfb8","repoDigests":["registry.k8s.io/kube-proxy@sha256:ad87ae17f92f26144bd5a35fc86a73f2fae6effd1666db51bc03f8e9213de532","registry.k8s.io/kube-proxy@sha256:c818ca1eff765e35348b77e484da915175cdf483f298e1f9885ed706fcbc
b34c"],"repoTags":["registry.k8s.io/kube-proxy:v1.35.0"],"size":"71986585"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["ghcr.io/medyagh/image-mirrors/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","ghcr.io/medyagh/image-mirrors/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","ghcr.io/medyagh/image-mirrors/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-148131","ghcr.io/medyagh/image-mirrors/kicbase/echo-server:latest"],"si
ze":"4944818"},{"id":"d3a266a042abc2c8df00d301d5bc381ca8f30abe9df654d8fc4554e400d8e5d1","repoDigests":["localhost/minikube-local-cache-test@sha256:a6516142b059536f5fa99a6b97131268d9934eb354f8dc2f76a02b32f8f340d3"],"repoTags":["localhost/minikube-local-cache-test:functional-148131"],"size":"3330"},{"id":"aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139","repoDigests":["registry.k8s.io/coredns/coredns@sha256:246e7333fde10251c693b68f13d21d6d64c7dbad866bbfa11bd49315e3f725a7","registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"],"repoTags":["registry.k8s.io/coredns/coredns:v1.13.1"],"size":"79193994"},{"id":"0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2","repoDigests":["registry.k8s.io/etcd@sha256:5279f56db4f32772bb41e47ca44c553f5c87a08fdf339d74c23a4cdc3c388d6a","registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890"],"repoTags":["registry.k8s.io/etcd:3.6.6-0"],"size":"63582405"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-148131 image ls --format json --alsologtostderr:
I0111 07:29:36.307225   47308 out.go:360] Setting OutFile to fd 1 ...
I0111 07:29:36.307315   47308 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0111 07:29:36.307321   47308 out.go:374] Setting ErrFile to fd 2...
I0111 07:29:36.307328   47308 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0111 07:29:36.307794   47308 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22402-4689/.minikube/bin
I0111 07:29:36.308427   47308 config.go:182] Loaded profile config "functional-148131": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
I0111 07:29:36.308508   47308 config.go:182] Loaded profile config "functional-148131": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
I0111 07:29:36.308989   47308 cli_runner.go:164] Run: docker container inspect functional-148131 --format={{.State.Status}}
I0111 07:29:36.331977   47308 ssh_runner.go:195] Run: systemctl --version
I0111 07:29:36.332030   47308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-148131
I0111 07:29:36.353823   47308 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/22402-4689/.minikube/machines/functional-148131/id_rsa Username:docker}
I0111 07:29:36.456467   47308 ssh_runner.go:195] Run: sudo crictl --timeout=10s images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.25s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-148131 image ls --format yaml --alsologtostderr
2026/01/11 07:29:36 [DEBUG] GET http://127.0.0.1:37211/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-148131 image ls --format yaml --alsologtostderr:
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: d3a266a042abc2c8df00d301d5bc381ca8f30abe9df654d8fc4554e400d8e5d1
repoDigests:
- localhost/minikube-local-cache-test@sha256:a6516142b059536f5fa99a6b97131268d9934eb354f8dc2f76a02b32f8f340d3
repoTags:
- localhost/minikube-local-cache-test:functional-148131
size: "3330"
- id: 2c9a4b058bd7e6ec479c38f9e8a1dac2f5ee5b0a3ebda6dfac968e8720229508
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:3e343fd915d2e214b9a68c045b94017832927edb89aafa471324f8d05a191111
- registry.k8s.io/kube-controller-manager@sha256:e0ce4c7d278a001734bbd8020ed1b7e535ae9d2412c700032eb3df190ea91a62
repoTags:
- registry.k8s.io/kube-controller-manager:v1.35.0
size: "76893520"
- id: 4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251
repoDigests:
- docker.io/kindest/kindnetd@sha256:7c22558dc06a570d46ea6e8a73b23cdc754eb81f7c08d3441a3171ad359ffc27
- docker.io/kindest/kindnetd@sha256:377e2e7a513148f7c942b51cd57bdce1589940df856105384ac7f753a1ab43ae
repoTags:
- docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
size: "107598204"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:246e7333fde10251c693b68f13d21d6d64c7dbad866bbfa11bd49315e3f725a7
- registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6
repoTags:
- registry.k8s.io/coredns/coredns:v1.13.1
size: "79193994"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 54c6e074ef93c709bfd8e76a38f54a65e9b5a38d25c9cf82e2633a21f89cd009
repoDigests:
- public.ecr.aws/docker/library/mysql@sha256:615302383ec847282233669b4c18396aa075b1279ff7729af0dcd99784361659
- public.ecr.aws/docker/library/mysql@sha256:90544b3775490579867a30988d48f0215fc3b88d78d8d62b2c0d96ee9226a2b7
repoTags:
- public.ecr.aws/docker/library/mysql:8.4
size: "803768460"
- id: b9d44994d8adde234cc849b6518ae39e786c40b1a7c9cc1de674fb3e7f913fc2
repoDigests:
- public.ecr.aws/nginx/nginx@sha256:92e3aff70715f47c5c05580bbe7ed66cb0625814e71b8885ccdbb6d89496f87f
- public.ecr.aws/nginx/nginx@sha256:a6fbdb4b73007c40f67bfc798a2045503b634f9c53e8309396e5aaf38c418ac0
repoTags:
- public.ecr.aws/nginx/nginx:alpine
size: "63312028"
- id: 0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2
repoDigests:
- registry.k8s.io/etcd@sha256:5279f56db4f32772bb41e47ca44c553f5c87a08fdf339d74c23a4cdc3c388d6a
- registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890
repoTags:
- registry.k8s.io/etcd:3.6.6-0
size: "63582405"
- id: 32652ff1bbe6b6df16a3dc621fcad4e6e185a2852bffc5847aab79e36c54bfb8
repoDigests:
- registry.k8s.io/kube-proxy@sha256:ad87ae17f92f26144bd5a35fc86a73f2fae6effd1666db51bc03f8e9213de532
- registry.k8s.io/kube-proxy@sha256:c818ca1eff765e35348b77e484da915175cdf483f298e1f9885ed706fcbcb34c
repoTags:
- registry.k8s.io/kube-proxy:v1.35.0
size: "71986585"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
repoTags: []
size: "249229937"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- ghcr.io/medyagh/image-mirrors/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- ghcr.io/medyagh/image-mirrors/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- ghcr.io/medyagh/image-mirrors/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-148131
- ghcr.io/medyagh/image-mirrors/kicbase/echo-server:latest
size: "4944818"
- id: 5c6acd67e9cd10eb60a246cd233db251ef62ea97e6572f897e873f0cb648f499
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:32f98b308862e1cf98c900927d84630fb86a836a480f02752a779eb85c1489f3
- registry.k8s.io/kube-apiserver@sha256:50e01ce089b6b6508e2f68ba0da943a3bc4134596e7e2afaac27dd26f71aca7a
repoTags:
- registry.k8s.io/kube-apiserver:v1.35.0
size: "90844140"
- id: 550794e3b12ac21ec7fd940bdfb45f7c8dae3c52acd8c70e580b882de20c3dcc
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:0ab622491a82532e01876d55e365c08c5bac01bcd5444a8ed58c1127ab47819f
- registry.k8s.io/kube-scheduler@sha256:dd2b6a420b171e83748166a66372f43384b3142fc4f6f56a6240a9e152cccd69
repoTags:
- registry.k8s.io/kube-scheduler:v1.35.0
size: "52763986"

functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-148131 image ls --format yaml --alsologtostderr:
I0111 07:29:36.304187   47310 out.go:360] Setting OutFile to fd 1 ...
I0111 07:29:36.304455   47310 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0111 07:29:36.304466   47310 out.go:374] Setting ErrFile to fd 2...
I0111 07:29:36.304471   47310 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0111 07:29:36.304658   47310 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22402-4689/.minikube/bin
I0111 07:29:36.305244   47310 config.go:182] Loaded profile config "functional-148131": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
I0111 07:29:36.305358   47310 config.go:182] Loaded profile config "functional-148131": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
I0111 07:29:36.305785   47310 cli_runner.go:164] Run: docker container inspect functional-148131 --format={{.State.Status}}
I0111 07:29:36.325478   47310 ssh_runner.go:195] Run: systemctl --version
I0111 07:29:36.325544   47310 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-148131
I0111 07:29:36.348603   47310 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/22402-4689/.minikube/machines/functional-148131/id_rsa Username:docker}
I0111 07:29:36.452381   47310 ssh_runner.go:195] Run: sudo crictl --timeout=10s images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.25s)
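Note: each image-listing test shells into the node and queries the container runtime directly (the "crictl ... images" Run line in the stderr above). Outside the test harness, roughly the same listing can be reproduced for this profile with a command along these lines; the profile name and crictl flags are taken from the log, and "minikube ssh --" is simply one way to run the command on the node:

	out/minikube-linux-amd64 -p functional-148131 ssh -- sudo crictl --timeout=10s images --output json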

TestFunctional/parallel/ImageCommands/ImageBuild (2.53s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-148131 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-148131 ssh pgrep buildkitd: exit status 1 (280.867263ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-148131 image build -t localhost/my-image:functional-148131 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-148131 image build -t localhost/my-image:functional-148131 testdata/build --alsologtostderr: (2.024782626s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-148131 image build -t localhost/my-image:functional-148131 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 13c80c4cb4b
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-148131
--> 4b3027b33f3
Successfully tagged localhost/my-image:functional-148131
4b3027b33f35baecf2b70f880e096b126a8798b1b80425fd8a19ab4322c133be
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-148131 image build -t localhost/my-image:functional-148131 testdata/build --alsologtostderr:
I0111 07:29:36.828581   47621 out.go:360] Setting OutFile to fd 1 ...
I0111 07:29:36.828737   47621 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0111 07:29:36.828747   47621 out.go:374] Setting ErrFile to fd 2...
I0111 07:29:36.828752   47621 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0111 07:29:36.828954   47621 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22402-4689/.minikube/bin
I0111 07:29:36.829479   47621 config.go:182] Loaded profile config "functional-148131": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
I0111 07:29:36.830097   47621 config.go:182] Loaded profile config "functional-148131": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
I0111 07:29:36.830533   47621 cli_runner.go:164] Run: docker container inspect functional-148131 --format={{.State.Status}}
I0111 07:29:36.849196   47621 ssh_runner.go:195] Run: systemctl --version
I0111 07:29:36.849239   47621 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-148131
I0111 07:29:36.866588   47621 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/22402-4689/.minikube/machines/functional-148131/id_rsa Username:docker}
I0111 07:29:36.965850   47621 build_images.go:162] Building image from path: /tmp/build.3824183156.tar
I0111 07:29:36.965931   47621 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0111 07:29:36.973326   47621 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3824183156.tar
I0111 07:29:36.976736   47621 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3824183156.tar: stat -c "%s %y" /var/lib/minikube/build/build.3824183156.tar: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/build/build.3824183156.tar': No such file or directory
I0111 07:29:36.976765   47621 ssh_runner.go:362] scp /tmp/build.3824183156.tar --> /var/lib/minikube/build/build.3824183156.tar (3072 bytes)
I0111 07:29:36.993480   47621 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3824183156
I0111 07:29:37.000635   47621 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3824183156 -xf /var/lib/minikube/build/build.3824183156.tar
I0111 07:29:37.008222   47621 crio.go:315] Building image: /var/lib/minikube/build/build.3824183156
I0111 07:29:37.008289   47621 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-148131 /var/lib/minikube/build/build.3824183156 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0111 07:29:38.775279   47621 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-148131 /var/lib/minikube/build/build.3824183156 --cgroup-manager=cgroupfs: (1.766966553s)
I0111 07:29:38.775333   47621 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3824183156
I0111 07:29:38.783142   47621 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3824183156.tar
I0111 07:29:38.790181   47621 build_images.go:218] Built localhost/my-image:functional-148131 from /tmp/build.3824183156.tar
I0111 07:29:38.790208   47621 build_images.go:134] succeeded building to: functional-148131
I0111 07:29:38.790222   47621 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-148131 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.53s)
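Note: the three STEP lines in the build output above imply that the testdata/build context used by this test contains a content.txt file plus a Dockerfile of roughly this shape (reconstructed from the logged steps; the file shipped in the minikube repository may differ in detail):

	FROM gcr.io/k8s-minikube/busybox
	RUN true
	ADD content.txt /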

TestFunctional/parallel/ImageCommands/Setup (0.34s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull ghcr.io/medyagh/image-mirrors/kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag ghcr.io/medyagh/image-mirrors/kicbase/echo-server:1.0 ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-148131
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.34s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-148131 image load --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-148131 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-amd64 -p functional-148131 image load --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-148131 --alsologtostderr: (1.014679457s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-148131 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.29s)

TestFunctional/parallel/Version/short (0.08s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-148131 version --short
--- PASS: TestFunctional/parallel/Version/short (0.08s)

TestFunctional/parallel/Version/components (0.64s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2280: (dbg) Run:  out/minikube-linux-amd64 -p functional-148131 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.64s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.11s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-148131 image load --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-148131 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-148131 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.11s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.19s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2129: (dbg) Run:  out/minikube-linux-amd64 -p functional-148131 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.19s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.19s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2129: (dbg) Run:  out/minikube-linux-amd64 -p functional-148131 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.19s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.2s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2129: (dbg) Run:  out/minikube-linux-amd64 -p functional-148131 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.20s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.42s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull ghcr.io/medyagh/image-mirrors/kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag ghcr.io/medyagh/image-mirrors/kicbase/echo-server:latest ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-148131
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-148131 image load --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-148131 --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-linux-amd64 -p functional-148131 image load --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-148131 --alsologtostderr: (1.042692879s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-148131 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.42s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.34s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-148131 image save ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-148131 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.34s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.5s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-148131 image rm ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-148131 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-148131 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.50s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.59s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-148131 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-148131 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.59s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.38s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-148131
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-148131 image save --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-148131 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-148131
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.38s)

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f ghcr.io/medyagh/image-mirrors/kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-148131
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-148131
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-148131
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (101.64s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-928691 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E0111 07:30:01.672025    8238 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/addons-171536/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 07:30:01.677276    8238 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/addons-171536/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 07:30:01.687530    8238 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/addons-171536/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 07:30:01.707822    8238 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/addons-171536/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 07:30:01.748050    8238 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/addons-171536/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 07:30:01.828362    8238 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/addons-171536/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 07:30:01.988767    8238 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/addons-171536/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 07:30:02.309347    8238 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/addons-171536/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 07:30:02.949920    8238 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/addons-171536/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 07:30:04.231165    8238 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/addons-171536/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 07:30:06.791400    8238 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/addons-171536/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 07:30:11.912590    8238 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/addons-171536/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 07:30:22.153018    8238 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/addons-171536/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 07:30:42.633642    8238 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/addons-171536/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-928691 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (1m40.873696904s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-928691 status --alsologtostderr -v 5
E0111 07:31:23.594213    8238 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/addons-171536/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestMultiControlPlane/serial/StartCluster (101.64s)

TestMultiControlPlane/serial/DeployApp (4.23s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-928691 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-928691 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-928691 kubectl -- rollout status deployment/busybox: (2.407439206s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-928691 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-928691 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-928691 kubectl -- exec busybox-769dd8b7dd-24582 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-928691 kubectl -- exec busybox-769dd8b7dd-t7shd -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-928691 kubectl -- exec busybox-769dd8b7dd-tjx4z -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-928691 kubectl -- exec busybox-769dd8b7dd-24582 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-928691 kubectl -- exec busybox-769dd8b7dd-t7shd -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-928691 kubectl -- exec busybox-769dd8b7dd-tjx4z -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-928691 kubectl -- exec busybox-769dd8b7dd-24582 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-928691 kubectl -- exec busybox-769dd8b7dd-t7shd -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-928691 kubectl -- exec busybox-769dd8b7dd-tjx4z -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (4.23s)

TestMultiControlPlane/serial/PingHostFromPods (1s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-928691 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-928691 kubectl -- exec busybox-769dd8b7dd-24582 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-928691 kubectl -- exec busybox-769dd8b7dd-24582 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-928691 kubectl -- exec busybox-769dd8b7dd-t7shd -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-928691 kubectl -- exec busybox-769dd8b7dd-t7shd -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-928691 kubectl -- exec busybox-769dd8b7dd-tjx4z -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-928691 kubectl -- exec busybox-769dd8b7dd-tjx4z -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.00s)

TestMultiControlPlane/serial/AddWorkerNode (23.7s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-928691 node add --alsologtostderr -v 5
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-928691 node add --alsologtostderr -v 5: (22.792139115s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-928691 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (23.70s)

TestMultiControlPlane/serial/NodeLabels (0.06s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-928691 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.91s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.91s)

TestMultiControlPlane/serial/CopyFile (17.13s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-928691 status --output json --alsologtostderr -v 5
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-928691 cp testdata/cp-test.txt ha-928691:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-928691 ssh -n ha-928691 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-928691 cp ha-928691:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1199092607/001/cp-test_ha-928691.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-928691 ssh -n ha-928691 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-928691 cp ha-928691:/home/docker/cp-test.txt ha-928691-m02:/home/docker/cp-test_ha-928691_ha-928691-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-928691 ssh -n ha-928691 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-928691 ssh -n ha-928691-m02 "sudo cat /home/docker/cp-test_ha-928691_ha-928691-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-928691 cp ha-928691:/home/docker/cp-test.txt ha-928691-m03:/home/docker/cp-test_ha-928691_ha-928691-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-928691 ssh -n ha-928691 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-928691 ssh -n ha-928691-m03 "sudo cat /home/docker/cp-test_ha-928691_ha-928691-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-928691 cp ha-928691:/home/docker/cp-test.txt ha-928691-m04:/home/docker/cp-test_ha-928691_ha-928691-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-928691 ssh -n ha-928691 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-928691 ssh -n ha-928691-m04 "sudo cat /home/docker/cp-test_ha-928691_ha-928691-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-928691 cp testdata/cp-test.txt ha-928691-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-928691 ssh -n ha-928691-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-928691 cp ha-928691-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1199092607/001/cp-test_ha-928691-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-928691 ssh -n ha-928691-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-928691 cp ha-928691-m02:/home/docker/cp-test.txt ha-928691:/home/docker/cp-test_ha-928691-m02_ha-928691.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-928691 ssh -n ha-928691-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-928691 ssh -n ha-928691 "sudo cat /home/docker/cp-test_ha-928691-m02_ha-928691.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-928691 cp ha-928691-m02:/home/docker/cp-test.txt ha-928691-m03:/home/docker/cp-test_ha-928691-m02_ha-928691-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-928691 ssh -n ha-928691-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-928691 ssh -n ha-928691-m03 "sudo cat /home/docker/cp-test_ha-928691-m02_ha-928691-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-928691 cp ha-928691-m02:/home/docker/cp-test.txt ha-928691-m04:/home/docker/cp-test_ha-928691-m02_ha-928691-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-928691 ssh -n ha-928691-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-928691 ssh -n ha-928691-m04 "sudo cat /home/docker/cp-test_ha-928691-m02_ha-928691-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-928691 cp testdata/cp-test.txt ha-928691-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-928691 ssh -n ha-928691-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-928691 cp ha-928691-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1199092607/001/cp-test_ha-928691-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-928691 ssh -n ha-928691-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-928691 cp ha-928691-m03:/home/docker/cp-test.txt ha-928691:/home/docker/cp-test_ha-928691-m03_ha-928691.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-928691 ssh -n ha-928691-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-928691 ssh -n ha-928691 "sudo cat /home/docker/cp-test_ha-928691-m03_ha-928691.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-928691 cp ha-928691-m03:/home/docker/cp-test.txt ha-928691-m02:/home/docker/cp-test_ha-928691-m03_ha-928691-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-928691 ssh -n ha-928691-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-928691 ssh -n ha-928691-m02 "sudo cat /home/docker/cp-test_ha-928691-m03_ha-928691-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-928691 cp ha-928691-m03:/home/docker/cp-test.txt ha-928691-m04:/home/docker/cp-test_ha-928691-m03_ha-928691-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-928691 ssh -n ha-928691-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-928691 ssh -n ha-928691-m04 "sudo cat /home/docker/cp-test_ha-928691-m03_ha-928691-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-928691 cp testdata/cp-test.txt ha-928691-m04:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-928691 ssh -n ha-928691-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-928691 cp ha-928691-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1199092607/001/cp-test_ha-928691-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-928691 ssh -n ha-928691-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-928691 cp ha-928691-m04:/home/docker/cp-test.txt ha-928691:/home/docker/cp-test_ha-928691-m04_ha-928691.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-928691 ssh -n ha-928691-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-928691 ssh -n ha-928691 "sudo cat /home/docker/cp-test_ha-928691-m04_ha-928691.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-928691 cp ha-928691-m04:/home/docker/cp-test.txt ha-928691-m02:/home/docker/cp-test_ha-928691-m04_ha-928691-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-928691 ssh -n ha-928691-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-928691 ssh -n ha-928691-m02 "sudo cat /home/docker/cp-test_ha-928691-m04_ha-928691-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-928691 cp ha-928691-m04:/home/docker/cp-test.txt ha-928691-m03:/home/docker/cp-test_ha-928691-m04_ha-928691-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-928691 ssh -n ha-928691-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-928691 ssh -n ha-928691-m03 "sudo cat /home/docker/cp-test_ha-928691-m04_ha-928691-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (17.13s)

TestMultiControlPlane/serial/StopSecondaryNode (13.32s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-928691 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-928691 node stop m02 --alsologtostderr -v 5: (12.599065288s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-928691 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-928691 status --alsologtostderr -v 5: exit status 7 (717.476951ms)
-- stdout --
	ha-928691
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-928691-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-928691-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-928691-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0111 07:32:23.994969   67636 out.go:360] Setting OutFile to fd 1 ...
	I0111 07:32:23.995199   67636 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 07:32:23.995215   67636 out.go:374] Setting ErrFile to fd 2...
	I0111 07:32:23.995219   67636 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 07:32:23.995822   67636 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22402-4689/.minikube/bin
	I0111 07:32:23.996130   67636 out.go:368] Setting JSON to false
	I0111 07:32:23.996148   67636 mustload.go:66] Loading cluster: ha-928691
	I0111 07:32:23.996321   67636 notify.go:221] Checking for updates...
	I0111 07:32:23.996887   67636 config.go:182] Loaded profile config "ha-928691": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 07:32:23.996906   67636 status.go:174] checking status of ha-928691 ...
	I0111 07:32:23.997369   67636 cli_runner.go:164] Run: docker container inspect ha-928691 --format={{.State.Status}}
	I0111 07:32:24.015043   67636 status.go:371] ha-928691 host status = "Running" (err=<nil>)
	I0111 07:32:24.015105   67636 host.go:66] Checking if "ha-928691" exists ...
	I0111 07:32:24.015425   67636 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-928691
	I0111 07:32:24.032871   67636 host.go:66] Checking if "ha-928691" exists ...
	I0111 07:32:24.033153   67636 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0111 07:32:24.033193   67636 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-928691
	I0111 07:32:24.050377   67636 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22402-4689/.minikube/machines/ha-928691/id_rsa Username:docker}
	I0111 07:32:24.149954   67636 ssh_runner.go:195] Run: systemctl --version
	I0111 07:32:24.156135   67636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0111 07:32:24.168774   67636 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0111 07:32:24.225694   67636 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:74 SystemTime:2026-01-11 07:32:24.216559791 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0111 07:32:24.226218   67636 kubeconfig.go:125] found "ha-928691" server: "https://192.168.49.254:8443"
	I0111 07:32:24.226254   67636 api_server.go:166] Checking apiserver status ...
	I0111 07:32:24.226296   67636 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0111 07:32:24.238814   67636 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1225/cgroup
	I0111 07:32:24.247642   67636 ssh_runner.go:195] Run: sudo grep ^0:: /proc/1225/cgroup
	I0111 07:32:24.255544   67636 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/system.slice/crio-276f063885e4256598a6faa91df6ca28d4bc16bc1b29130754dd6b012a7771b3.scope/container/cgroup.freeze
	I0111 07:32:24.262994   67636 api_server.go:299] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0111 07:32:24.267007   67636 api_server.go:325] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0111 07:32:24.267027   67636 status.go:463] ha-928691 apiserver status = Running (err=<nil>)
	I0111 07:32:24.267036   67636 status.go:176] ha-928691 status: &{Name:ha-928691 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0111 07:32:24.267052   67636 status.go:174] checking status of ha-928691-m02 ...
	I0111 07:32:24.267298   67636 cli_runner.go:164] Run: docker container inspect ha-928691-m02 --format={{.State.Status}}
	I0111 07:32:24.284787   67636 status.go:371] ha-928691-m02 host status = "Stopped" (err=<nil>)
	I0111 07:32:24.284826   67636 status.go:384] host is not running, skipping remaining checks
	I0111 07:32:24.284833   67636 status.go:176] ha-928691-m02 status: &{Name:ha-928691-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0111 07:32:24.284851   67636 status.go:174] checking status of ha-928691-m03 ...
	I0111 07:32:24.285148   67636 cli_runner.go:164] Run: docker container inspect ha-928691-m03 --format={{.State.Status}}
	I0111 07:32:24.303458   67636 status.go:371] ha-928691-m03 host status = "Running" (err=<nil>)
	I0111 07:32:24.303482   67636 host.go:66] Checking if "ha-928691-m03" exists ...
	I0111 07:32:24.303741   67636 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-928691-m03
	I0111 07:32:24.321689   67636 host.go:66] Checking if "ha-928691-m03" exists ...
	I0111 07:32:24.321979   67636 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0111 07:32:24.322022   67636 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-928691-m03
	I0111 07:32:24.339970   67636 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/22402-4689/.minikube/machines/ha-928691-m03/id_rsa Username:docker}
	I0111 07:32:24.439048   67636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0111 07:32:24.451630   67636 kubeconfig.go:125] found "ha-928691" server: "https://192.168.49.254:8443"
	I0111 07:32:24.451659   67636 api_server.go:166] Checking apiserver status ...
	I0111 07:32:24.451692   67636 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0111 07:32:24.462097   67636 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1186/cgroup
	I0111 07:32:24.470329   67636 ssh_runner.go:195] Run: sudo grep ^0:: /proc/1186/cgroup
	I0111 07:32:24.477888   67636 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/system.slice/crio-bc7d9432fadcab6c3aaaea86f294c24bfb9f5eb64a114a0dbfab908944010917.scope/container/cgroup.freeze
	I0111 07:32:24.484989   67636 api_server.go:299] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0111 07:32:24.489059   67636 api_server.go:325] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0111 07:32:24.489082   67636 status.go:463] ha-928691-m03 apiserver status = Running (err=<nil>)
	I0111 07:32:24.489089   67636 status.go:176] ha-928691-m03 status: &{Name:ha-928691-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0111 07:32:24.489103   67636 status.go:174] checking status of ha-928691-m04 ...
	I0111 07:32:24.489324   67636 cli_runner.go:164] Run: docker container inspect ha-928691-m04 --format={{.State.Status}}
	I0111 07:32:24.507405   67636 status.go:371] ha-928691-m04 host status = "Running" (err=<nil>)
	I0111 07:32:24.507448   67636 host.go:66] Checking if "ha-928691-m04" exists ...
	I0111 07:32:24.507697   67636 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-928691-m04
	I0111 07:32:24.525753   67636 host.go:66] Checking if "ha-928691-m04" exists ...
	I0111 07:32:24.526062   67636 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0111 07:32:24.526113   67636 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-928691-m04
	I0111 07:32:24.543824   67636 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/22402-4689/.minikube/machines/ha-928691-m04/id_rsa Username:docker}
	I0111 07:32:24.643244   67636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0111 07:32:24.656005   67636 status.go:176] ha-928691-m04 status: &{Name:ha-928691-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (13.32s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.73s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.73s)

TestMultiControlPlane/serial/RestartSecondaryNode (8.51s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-928691 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-928691 node start m02 --alsologtostderr -v 5: (7.556903475s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-928691 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (8.51s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.93s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.93s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (107.6s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-928691 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-928691 stop --alsologtostderr -v 5
E0111 07:32:45.515115    8238 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/addons-171536/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-928691 stop --alsologtostderr -v 5: (45.96248977s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-928691 start --wait true --alsologtostderr -v 5
E0111 07:33:53.275091    8238 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/functional-148131/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 07:33:53.280373    8238 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/functional-148131/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 07:33:53.290641    8238 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/functional-148131/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 07:33:53.310967    8238 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/functional-148131/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 07:33:53.351266    8238 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/functional-148131/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 07:33:53.431598    8238 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/functional-148131/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 07:33:53.592503    8238 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/functional-148131/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 07:33:53.913030    8238 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/functional-148131/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 07:33:54.553249    8238 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/functional-148131/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 07:33:55.834105    8238 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/functional-148131/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 07:33:58.395019    8238 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/functional-148131/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 07:34:03.515216    8238 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/functional-148131/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 07:34:13.755497    8238 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/functional-148131/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-928691 start --wait true --alsologtostderr -v 5: (1m1.511871371s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-928691 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (107.60s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (10.71s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-928691 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-928691 node delete m03 --alsologtostderr -v 5: (9.84068188s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-928691 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (10.71s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.73s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.73s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (47.71s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-928691 stop --alsologtostderr -v 5
E0111 07:34:34.236148    8238 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/functional-148131/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 07:35:01.669093    8238 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/addons-171536/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 07:35:15.198111    8238 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/functional-148131/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-928691 stop --alsologtostderr -v 5: (47.592171571s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-928691 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-928691 status --alsologtostderr -v 5: exit status 7 (115.547143ms)

                                                
                                                
-- stdout --
	ha-928691
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-928691-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-928691-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0111 07:35:21.513967   82035 out.go:360] Setting OutFile to fd 1 ...
	I0111 07:35:21.514235   82035 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 07:35:21.514244   82035 out.go:374] Setting ErrFile to fd 2...
	I0111 07:35:21.514247   82035 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 07:35:21.514411   82035 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22402-4689/.minikube/bin
	I0111 07:35:21.514580   82035 out.go:368] Setting JSON to false
	I0111 07:35:21.514675   82035 mustload.go:66] Loading cluster: ha-928691
	I0111 07:35:21.514709   82035 notify.go:221] Checking for updates...
	I0111 07:35:21.515087   82035 config.go:182] Loaded profile config "ha-928691": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 07:35:21.515103   82035 status.go:174] checking status of ha-928691 ...
	I0111 07:35:21.515521   82035 cli_runner.go:164] Run: docker container inspect ha-928691 --format={{.State.Status}}
	I0111 07:35:21.534653   82035 status.go:371] ha-928691 host status = "Stopped" (err=<nil>)
	I0111 07:35:21.534679   82035 status.go:384] host is not running, skipping remaining checks
	I0111 07:35:21.534686   82035 status.go:176] ha-928691 status: &{Name:ha-928691 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0111 07:35:21.534734   82035 status.go:174] checking status of ha-928691-m02 ...
	I0111 07:35:21.535095   82035 cli_runner.go:164] Run: docker container inspect ha-928691-m02 --format={{.State.Status}}
	I0111 07:35:21.553253   82035 status.go:371] ha-928691-m02 host status = "Stopped" (err=<nil>)
	I0111 07:35:21.553290   82035 status.go:384] host is not running, skipping remaining checks
	I0111 07:35:21.553303   82035 status.go:176] ha-928691-m02 status: &{Name:ha-928691-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0111 07:35:21.553338   82035 status.go:174] checking status of ha-928691-m04 ...
	I0111 07:35:21.553603   82035 cli_runner.go:164] Run: docker container inspect ha-928691-m04 --format={{.State.Status}}
	I0111 07:35:21.571023   82035 status.go:371] ha-928691-m04 host status = "Stopped" (err=<nil>)
	I0111 07:35:21.571043   82035 status.go:384] host is not running, skipping remaining checks
	I0111 07:35:21.571048   82035 status.go:176] ha-928691-m04 status: &{Name:ha-928691-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (47.71s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (51.99s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-928691 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E0111 07:35:29.356106    8238 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/addons-171536/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-928691 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (51.164571696s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-928691 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (51.99s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.71s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.71s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (43.94s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-928691 node add --control-plane --alsologtostderr -v 5
E0111 07:36:37.118508    8238 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/functional-148131/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-928691 node add --control-plane --alsologtostderr -v 5: (43.002172726s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-928691 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (43.94s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.93s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.93s)

                                                
                                    
TestJSONOutput/start/Command (35.99s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-698286 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-698286 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (35.991566273s)
--- PASS: TestJSONOutput/start/Command (35.99s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (16.16s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-698286 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-698286 --output=json --user=testUser: (16.16340261s)
--- PASS: TestJSONOutput/stop/Command (16.16s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.24s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-024710 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-024710 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (77.302324ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"5a198879-e439-48fd-a4d8-c041b5cc8052","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-024710] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"7685a630-7f4d-4618-876a-6492a4d4dd21","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22402"}}
	{"specversion":"1.0","id":"45923a0c-4807-4e38-bd13-07ebc4916c8c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"df8d2e0c-7ed3-4687-ba64-f264975af85c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22402-4689/kubeconfig"}}
	{"specversion":"1.0","id":"e6045e22-573a-4311-b13e-ab473ab22cf5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22402-4689/.minikube"}}
	{"specversion":"1.0","id":"e0494356-35cb-4e9d-84f7-c30e9ff4796b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"00dc3171-d032-443d-a9b0-00354383fe0e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"9d844aa8-3697-400f-8b6d-5197eac35734","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:176: Cleaning up "json-output-error-024710" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-024710
--- PASS: TestErrorJSONOutput (0.24s)

                                                
                                    
TestKicCustomNetwork/create_custom_network (27.07s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-377050 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-377050 --network=: (24.920668709s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-377050" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-377050
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-377050: (2.13276982s)
--- PASS: TestKicCustomNetwork/create_custom_network (27.07s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (19.25s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-328024 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-328024 --network=bridge: (17.251678142s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-328024" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-328024
E0111 07:38:53.275024    8238 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/functional-148131/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-328024: (1.981449525s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (19.25s)

                                                
                                    
TestKicExistingNetwork (19.7s)

                                                
                                                
=== RUN   TestKicExistingNetwork
I0111 07:38:54.806212    8238 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0111 07:38:54.823796    8238 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0111 07:38:54.823872    8238 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I0111 07:38:54.823892    8238 cli_runner.go:164] Run: docker network inspect existing-network
W0111 07:38:54.840832    8238 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I0111 07:38:54.840858    8238 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

                                                
                                                
stderr:
Error response from daemon: network existing-network not found
I0111 07:38:54.840871    8238 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

                                                
                                                
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

                                                
                                                
** /stderr **
I0111 07:38:54.841026    8238 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0111 07:38:54.858381    8238 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-97182e010def IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:7c:25:92:4b:a6} reservation:<nil>}
I0111 07:38:54.858718    8238 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00204a4e0}
I0111 07:38:54.858741    8238 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I0111 07:38:54.858786    8238 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I0111 07:38:54.905377    8238 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-399514 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-399514 --network=existing-network: (17.59972814s)
helpers_test.go:176: Cleaning up "existing-network-399514" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-399514
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-399514: (1.972953476s)
I0111 07:39:14.494598    8238 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (19.70s)

                                                
                                    
TestKicCustomSubnet (23.31s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-156480 --subnet=192.168.60.0/24
E0111 07:39:20.964733    8238 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/functional-148131/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-156480 --subnet=192.168.60.0/24: (21.169492258s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-156480 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:176: Cleaning up "custom-subnet-156480" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-156480
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-156480: (2.11698416s)
--- PASS: TestKicCustomSubnet (23.31s)

                                                
                                    
TestKicStaticIP (21.34s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-079731 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-079731 --static-ip=192.168.200.200: (19.059273328s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-079731 ip
helpers_test.go:176: Cleaning up "static-ip-079731" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-079731
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-079731: (2.130855481s)
--- PASS: TestKicStaticIP (21.34s)

                                                
                                    
TestMainNoArgs (0.06s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
TestMinikubeProfile (44.17s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-638253 --driver=docker  --container-runtime=crio
E0111 07:40:01.666982    8238 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/addons-171536/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-638253 --driver=docker  --container-runtime=crio: (18.964190011s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-641085 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-641085 --driver=docker  --container-runtime=crio: (19.223079818s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-638253
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-641085
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:176: Cleaning up "second-641085" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p second-641085
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p second-641085: (2.344122481s)
helpers_test.go:176: Cleaning up "first-638253" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p first-638253
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p first-638253: (2.353148768s)
--- PASS: TestMinikubeProfile (44.17s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (7.81s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-342499 --memory=3072 --mount-string /tmp/TestMountStartserial3456688092/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-342499 --memory=3072 --mount-string /tmp/TestMountStartserial3456688092/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (6.806829193s)
--- PASS: TestMountStart/serial/StartWithMountFirst (7.81s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.28s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-342499 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.28s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (7.8s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-363971 --memory=3072 --mount-string /tmp/TestMountStartserial3456688092/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-363971 --memory=3072 --mount-string /tmp/TestMountStartserial3456688092/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (6.795209588s)
--- PASS: TestMountStart/serial/StartWithMountSecond (7.80s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-363971 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.27s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.66s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-342499 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-342499 --alsologtostderr -v=5: (1.662889658s)
--- PASS: TestMountStart/serial/DeleteFirst (1.66s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.28s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-363971 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.28s)

                                                
                                    
TestMountStart/serial/Stop (1.24s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-363971
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-363971: (1.243666397s)
--- PASS: TestMountStart/serial/Stop (1.24s)

                                                
                                    
TestMountStart/serial/RestartStopped (7.35s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-363971
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-363971: (6.345982764s)
--- PASS: TestMountStart/serial/RestartStopped (7.35s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.28s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-363971 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.28s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (60.17s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-981949 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-981949 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (59.680018886s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-981949 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (60.17s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (3.32s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-981949 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-981949 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-981949 -- rollout status deployment/busybox: (1.981376103s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-981949 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-981949 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-981949 -- exec busybox-769dd8b7dd-khtdk -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-981949 -- exec busybox-769dd8b7dd-pgktj -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-981949 -- exec busybox-769dd8b7dd-khtdk -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-981949 -- exec busybox-769dd8b7dd-pgktj -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-981949 -- exec busybox-769dd8b7dd-khtdk -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-981949 -- exec busybox-769dd8b7dd-pgktj -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (3.32s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.7s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-981949 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-981949 -- exec busybox-769dd8b7dd-khtdk -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-981949 -- exec busybox-769dd8b7dd-khtdk -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-981949 -- exec busybox-769dd8b7dd-pgktj -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-981949 -- exec busybox-769dd8b7dd-pgktj -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.70s)

                                                
                                    
TestMultiNode/serial/AddNode (26.14s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-981949 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-981949 -v=5 --alsologtostderr: (25.46890671s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-981949 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (26.14s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-981949 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.67s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.67s)

                                                
                                    
TestMultiNode/serial/CopyFile (9.98s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-981949 status --output json --alsologtostderr
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-981949 cp testdata/cp-test.txt multinode-981949:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-981949 ssh -n multinode-981949 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-981949 cp multinode-981949:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3385700707/001/cp-test_multinode-981949.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-981949 ssh -n multinode-981949 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-981949 cp multinode-981949:/home/docker/cp-test.txt multinode-981949-m02:/home/docker/cp-test_multinode-981949_multinode-981949-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-981949 ssh -n multinode-981949 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-981949 ssh -n multinode-981949-m02 "sudo cat /home/docker/cp-test_multinode-981949_multinode-981949-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-981949 cp multinode-981949:/home/docker/cp-test.txt multinode-981949-m03:/home/docker/cp-test_multinode-981949_multinode-981949-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-981949 ssh -n multinode-981949 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-981949 ssh -n multinode-981949-m03 "sudo cat /home/docker/cp-test_multinode-981949_multinode-981949-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-981949 cp testdata/cp-test.txt multinode-981949-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-981949 ssh -n multinode-981949-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-981949 cp multinode-981949-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3385700707/001/cp-test_multinode-981949-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-981949 ssh -n multinode-981949-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-981949 cp multinode-981949-m02:/home/docker/cp-test.txt multinode-981949:/home/docker/cp-test_multinode-981949-m02_multinode-981949.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-981949 ssh -n multinode-981949-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-981949 ssh -n multinode-981949 "sudo cat /home/docker/cp-test_multinode-981949-m02_multinode-981949.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-981949 cp multinode-981949-m02:/home/docker/cp-test.txt multinode-981949-m03:/home/docker/cp-test_multinode-981949-m02_multinode-981949-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-981949 ssh -n multinode-981949-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-981949 ssh -n multinode-981949-m03 "sudo cat /home/docker/cp-test_multinode-981949-m02_multinode-981949-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-981949 cp testdata/cp-test.txt multinode-981949-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-981949 ssh -n multinode-981949-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-981949 cp multinode-981949-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3385700707/001/cp-test_multinode-981949-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-981949 ssh -n multinode-981949-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-981949 cp multinode-981949-m03:/home/docker/cp-test.txt multinode-981949:/home/docker/cp-test_multinode-981949-m03_multinode-981949.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-981949 ssh -n multinode-981949-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-981949 ssh -n multinode-981949 "sudo cat /home/docker/cp-test_multinode-981949-m03_multinode-981949.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-981949 cp multinode-981949-m03:/home/docker/cp-test.txt multinode-981949-m02:/home/docker/cp-test_multinode-981949-m03_multinode-981949-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-981949 ssh -n multinode-981949-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-981949 ssh -n multinode-981949-m02 "sudo cat /home/docker/cp-test_multinode-981949-m03_multinode-981949-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.98s)

                                                
                                    
TestMultiNode/serial/StopNode (2.3s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-981949 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-981949 node stop m03: (1.269690393s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-981949 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-981949 status: exit status 7 (513.378211ms)

                                                
                                                
-- stdout --
	multinode-981949
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-981949-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-981949-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-981949 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-981949 status --alsologtostderr: exit status 7 (519.020178ms)

                                                
                                                
-- stdout --
	multinode-981949
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-981949-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-981949-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0111 07:42:55.063882  142086 out.go:360] Setting OutFile to fd 1 ...
	I0111 07:42:55.064172  142086 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 07:42:55.064182  142086 out.go:374] Setting ErrFile to fd 2...
	I0111 07:42:55.064186  142086 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 07:42:55.064401  142086 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22402-4689/.minikube/bin
	I0111 07:42:55.064575  142086 out.go:368] Setting JSON to false
	I0111 07:42:55.064597  142086 mustload.go:66] Loading cluster: multinode-981949
	I0111 07:42:55.064678  142086 notify.go:221] Checking for updates...
	I0111 07:42:55.065150  142086 config.go:182] Loaded profile config "multinode-981949": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 07:42:55.065176  142086 status.go:174] checking status of multinode-981949 ...
	I0111 07:42:55.065708  142086 cli_runner.go:164] Run: docker container inspect multinode-981949 --format={{.State.Status}}
	I0111 07:42:55.086871  142086 status.go:371] multinode-981949 host status = "Running" (err=<nil>)
	I0111 07:42:55.086896  142086 host.go:66] Checking if "multinode-981949" exists ...
	I0111 07:42:55.087161  142086 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-981949
	I0111 07:42:55.105167  142086 host.go:66] Checking if "multinode-981949" exists ...
	I0111 07:42:55.105421  142086 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0111 07:42:55.105469  142086 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-981949
	I0111 07:42:55.123443  142086 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/22402-4689/.minikube/machines/multinode-981949/id_rsa Username:docker}
	I0111 07:42:55.224355  142086 ssh_runner.go:195] Run: systemctl --version
	I0111 07:42:55.230699  142086 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0111 07:42:55.243290  142086 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0111 07:42:55.297685  142086 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:false NGoroutines:64 SystemTime:2026-01-11 07:42:55.287972908 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0111 07:42:55.298267  142086 kubeconfig.go:125] found "multinode-981949" server: "https://192.168.67.2:8443"
	I0111 07:42:55.298302  142086 api_server.go:166] Checking apiserver status ...
	I0111 07:42:55.298337  142086 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0111 07:42:55.309699  142086 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1218/cgroup
	I0111 07:42:55.318094  142086 ssh_runner.go:195] Run: sudo grep ^0:: /proc/1218/cgroup
	I0111 07:42:55.326017  142086 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/system.slice/crio-71a7b75559933cca7a89d59e9ae259ce3ee63f1a9ef1704bdfa5238afb633ef9.scope/container/cgroup.freeze
	I0111 07:42:55.333248  142086 api_server.go:299] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0111 07:42:55.337654  142086 api_server.go:325] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0111 07:42:55.337684  142086 status.go:463] multinode-981949 apiserver status = Running (err=<nil>)
	I0111 07:42:55.337696  142086 status.go:176] multinode-981949 status: &{Name:multinode-981949 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0111 07:42:55.337720  142086 status.go:174] checking status of multinode-981949-m02 ...
	I0111 07:42:55.338003  142086 cli_runner.go:164] Run: docker container inspect multinode-981949-m02 --format={{.State.Status}}
	I0111 07:42:55.356129  142086 status.go:371] multinode-981949-m02 host status = "Running" (err=<nil>)
	I0111 07:42:55.356151  142086 host.go:66] Checking if "multinode-981949-m02" exists ...
	I0111 07:42:55.356484  142086 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-981949-m02
	I0111 07:42:55.374738  142086 host.go:66] Checking if "multinode-981949-m02" exists ...
	I0111 07:42:55.375033  142086 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0111 07:42:55.375092  142086 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-981949-m02
	I0111 07:42:55.393482  142086 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/22402-4689/.minikube/machines/multinode-981949-m02/id_rsa Username:docker}
	I0111 07:42:55.493983  142086 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0111 07:42:55.506303  142086 status.go:176] multinode-981949-m02 status: &{Name:multinode-981949-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0111 07:42:55.506337  142086 status.go:174] checking status of multinode-981949-m03 ...
	I0111 07:42:55.506593  142086 cli_runner.go:164] Run: docker container inspect multinode-981949-m03 --format={{.State.Status}}
	I0111 07:42:55.524572  142086 status.go:371] multinode-981949-m03 host status = "Stopped" (err=<nil>)
	I0111 07:42:55.524593  142086 status.go:384] host is not running, skipping remaining checks
	I0111 07:42:55.524598  142086 status.go:176] multinode-981949-m03 status: &{Name:multinode-981949-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.30s)
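
For reference, a minimal Go sketch of how a caller could interpret the exit status 7 seen above: the status command still prints per-node state, but exits non-zero because host m03 is stopped. The binary path and profile name are the ones used in this run; this is an illustration, not part of the test suite.

    // statuscheck.go: run `minikube status` and distinguish "a node is stopped"
    // (exit status 7, as in the StopNode output above) from a hard failure.
    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("out/minikube-linux-amd64", "-p", "multinode-981949", "status")
        out, err := cmd.CombinedOutput()
        fmt.Print(string(out)) // per-node status block, as shown in the stdout above

        var ee *exec.ExitError
        switch {
        case err == nil:
            fmt.Println("all nodes running")
        case errors.As(err, &ee) && ee.ExitCode() == 7:
            // The command itself worked; exit 7 only signals that at least one host is stopped.
            fmt.Println("cluster reachable, but at least one node is stopped")
        default:
            fmt.Println("status failed:", err)
        }
    }
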

                                                
                                    
TestMultiNode/serial/StartAfterStop (7.21s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-981949 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-981949 node start m03 -v=5 --alsologtostderr: (6.492643829s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-981949 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (7.21s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (82.69s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-981949
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-981949
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-981949: (31.31456186s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-981949 --wait=true -v=5 --alsologtostderr
E0111 07:43:53.275526    8238 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/functional-148131/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-981949 --wait=true -v=5 --alsologtostderr: (51.261726951s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-981949
--- PASS: TestMultiNode/serial/RestartKeepsNodes (82.69s)

                                                
                                    
TestMultiNode/serial/DeleteNode (6.04s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-981949 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-981949 node delete m03: (5.439039302s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-981949 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (6.04s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (30.37s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-981949 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-981949 stop: (30.166927462s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-981949 status
E0111 07:45:01.666166    8238 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/addons-171536/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-981949 status: exit status 7 (100.51492ms)

                                                
                                                
-- stdout --
	multinode-981949
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-981949-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-981949 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-981949 status --alsologtostderr: exit status 7 (99.913463ms)

                                                
                                                
-- stdout --
	multinode-981949
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-981949-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0111 07:45:01.795947  151930 out.go:360] Setting OutFile to fd 1 ...
	I0111 07:45:01.796076  151930 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 07:45:01.796086  151930 out.go:374] Setting ErrFile to fd 2...
	I0111 07:45:01.796090  151930 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 07:45:01.796332  151930 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22402-4689/.minikube/bin
	I0111 07:45:01.796528  151930 out.go:368] Setting JSON to false
	I0111 07:45:01.796556  151930 mustload.go:66] Loading cluster: multinode-981949
	I0111 07:45:01.796723  151930 notify.go:221] Checking for updates...
	I0111 07:45:01.797066  151930 config.go:182] Loaded profile config "multinode-981949": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 07:45:01.797089  151930 status.go:174] checking status of multinode-981949 ...
	I0111 07:45:01.797690  151930 cli_runner.go:164] Run: docker container inspect multinode-981949 --format={{.State.Status}}
	I0111 07:45:01.816855  151930 status.go:371] multinode-981949 host status = "Stopped" (err=<nil>)
	I0111 07:45:01.816880  151930 status.go:384] host is not running, skipping remaining checks
	I0111 07:45:01.816886  151930 status.go:176] multinode-981949 status: &{Name:multinode-981949 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0111 07:45:01.816909  151930 status.go:174] checking status of multinode-981949-m02 ...
	I0111 07:45:01.817169  151930 cli_runner.go:164] Run: docker container inspect multinode-981949-m02 --format={{.State.Status}}
	I0111 07:45:01.835316  151930 status.go:371] multinode-981949-m02 host status = "Stopped" (err=<nil>)
	I0111 07:45:01.835342  151930 status.go:384] host is not running, skipping remaining checks
	I0111 07:45:01.835350  151930 status.go:176] multinode-981949-m02 status: &{Name:multinode-981949-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (30.37s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (47.45s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-981949 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-981949 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (46.836621221s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-981949 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (47.45s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (22.2s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-981949
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-981949-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-981949-m02 --driver=docker  --container-runtime=crio: exit status 14 (75.819269ms)

                                                
                                                
-- stdout --
	* [multinode-981949-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22402
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22402-4689/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22402-4689/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-981949-m02' is duplicated with machine name 'multinode-981949-m02' in profile 'multinode-981949'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-981949-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-981949-m03 --driver=docker  --container-runtime=crio: (19.428741982s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-981949
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-981949: exit status 80 (299.283382ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-981949 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-981949-m03 already exists in multinode-981949-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-981949-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-981949-m03: (2.338362797s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (22.20s)

                                                
                                    
TestScheduledStopUnix (95.25s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-369235 --memory=3072 --driver=docker  --container-runtime=crio
E0111 07:46:24.716964    8238 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/addons-171536/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-369235 --memory=3072 --driver=docker  --container-runtime=crio: (19.285805529s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-369235 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I0111 07:46:34.983683  161852 out.go:360] Setting OutFile to fd 1 ...
	I0111 07:46:34.983941  161852 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 07:46:34.983952  161852 out.go:374] Setting ErrFile to fd 2...
	I0111 07:46:34.983956  161852 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 07:46:34.984113  161852 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22402-4689/.minikube/bin
	I0111 07:46:34.984422  161852 out.go:368] Setting JSON to false
	I0111 07:46:34.984513  161852 mustload.go:66] Loading cluster: scheduled-stop-369235
	I0111 07:46:34.984797  161852 config.go:182] Loaded profile config "scheduled-stop-369235": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 07:46:34.984888  161852 profile.go:143] Saving config to /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/scheduled-stop-369235/config.json ...
	I0111 07:46:34.985067  161852 mustload.go:66] Loading cluster: scheduled-stop-369235
	I0111 07:46:34.985161  161852 config.go:182] Loaded profile config "scheduled-stop-369235": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0

                                                
                                                
** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-369235 -n scheduled-stop-369235
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-369235 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I0111 07:46:35.380060  162007 out.go:360] Setting OutFile to fd 1 ...
	I0111 07:46:35.380152  162007 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 07:46:35.380156  162007 out.go:374] Setting ErrFile to fd 2...
	I0111 07:46:35.380161  162007 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 07:46:35.380346  162007 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22402-4689/.minikube/bin
	I0111 07:46:35.380576  162007 out.go:368] Setting JSON to false
	I0111 07:46:35.380767  162007 daemonize_unix.go:73] killing process 161887 as it is an old scheduled stop
	I0111 07:46:35.380883  162007 mustload.go:66] Loading cluster: scheduled-stop-369235
	I0111 07:46:35.381294  162007 config.go:182] Loaded profile config "scheduled-stop-369235": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 07:46:35.381396  162007 profile.go:143] Saving config to /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/scheduled-stop-369235/config.json ...
	I0111 07:46:35.381629  162007 mustload.go:66] Loading cluster: scheduled-stop-369235
	I0111 07:46:35.381761  162007 config.go:182] Loaded profile config "scheduled-stop-369235": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I0111 07:46:35.387073    8238 retry.go:84] will retry after 0s: open /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/scheduled-stop-369235/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-369235 --cancel-scheduled
minikube stop output:

                                                
                                                
-- stdout --
	* All existing scheduled stops cancelled

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-369235 -n scheduled-stop-369235
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-369235
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-369235 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I0111 07:47:01.295100  162715 out.go:360] Setting OutFile to fd 1 ...
	I0111 07:47:01.295372  162715 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 07:47:01.295382  162715 out.go:374] Setting ErrFile to fd 2...
	I0111 07:47:01.295386  162715 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 07:47:01.295589  162715 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22402-4689/.minikube/bin
	I0111 07:47:01.295862  162715 out.go:368] Setting JSON to false
	I0111 07:47:01.295934  162715 mustload.go:66] Loading cluster: scheduled-stop-369235
	I0111 07:47:01.296214  162715 config.go:182] Loaded profile config "scheduled-stop-369235": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 07:47:01.296285  162715 profile.go:143] Saving config to /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/scheduled-stop-369235/config.json ...
	I0111 07:47:01.296468  162715 mustload.go:66] Loading cluster: scheduled-stop-369235
	I0111 07:47:01.296566  162715 config.go:182] Loaded profile config "scheduled-stop-369235": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-369235
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-369235: exit status 7 (78.386295ms)

                                                
                                                
-- stdout --
	scheduled-stop-369235
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-369235 -n scheduled-stop-369235
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-369235 -n scheduled-stop-369235: exit status 7 (75.690297ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:176: Cleaning up "scheduled-stop-369235" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-369235
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-369235: (4.433050672s)
--- PASS: TestScheduledStopUnix (95.25s)
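
The scheduled-stop flow above can be driven the same way outside the test. A rough Go sketch, reusing the binary path and profile name from this run; the --schedule, --cancel-scheduled and TimeToStop template usages are exactly the ones exercised in the log:

    // scheduledstop.go: schedule a stop, read back TimeToStop, then cancel,
    // mirroring the TestScheduledStopUnix steps logged above.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func mk(args ...string) (string, error) {
        out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
        return strings.TrimSpace(string(out)), err
    }

    func main() {
        profile := "scheduled-stop-369235"

        // Schedule a stop five minutes out (first step of the test).
        if _, err := mk("stop", "-p", profile, "--schedule", "5m"); err != nil {
            fmt.Println("schedule failed:", err)
            return
        }

        // The pending stop is visible through the TimeToStop status field.
        ttl, _ := mk("status", "--format={{.TimeToStop}}", "-p", profile, "-n", profile)
        fmt.Println("time until scheduled stop:", ttl)

        // Cancel it again, as the --cancel-scheduled step in the log does.
        if _, err := mk("stop", "-p", profile, "--cancel-scheduled"); err != nil {
            fmt.Println("cancel failed:", err)
        }
    }
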

                                                
                                    
TestInsufficientStorage (11.72s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-671147 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-671147 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (9.240783367s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"5a107228-03fc-46ae-8aa2-5da0e9048ac5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-671147] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"7127c1a3-b22d-4f4a-82f8-7c5419a8c4e1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22402"}}
	{"specversion":"1.0","id":"885322a4-ec17-4a5f-9ade-f667478418d0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"4fb96b42-aa89-4a53-8539-2deea86534a7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22402-4689/kubeconfig"}}
	{"specversion":"1.0","id":"af2e16c4-c717-4e7e-b487-75ce9354cdb4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22402-4689/.minikube"}}
	{"specversion":"1.0","id":"f98b4d1a-f404-4b87-a62e-5dbf9e4ca55f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"b95f8a32-49ff-4fae-8a34-32ea45fbed74","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"ae3338be-9d72-4a1e-83ae-ae5af51d44fa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"e67b5d1b-ee1c-4c56-9f2e-61ef3ad3cb6a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"a1d78e0c-832b-4e5b-8de6-ccb3fad4b1b9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"aef2b29d-1d4f-4f3c-b571-df3768c6f63f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"0edb7ddd-b8eb-4793-a480-96d91a3287db","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-671147\" primary control-plane node in \"insufficient-storage-671147\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"48c53bad-f465-435b-84bf-d3d0322693b4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1768032998-22402 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"9a75b359-b006-4e7b-929e-a29ef5cc843b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"b3fca7f3-1ac2-4703-b5fa-4a1bf17ff8de","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-671147 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-671147 --output=json --layout=cluster: exit status 7 (290.110283ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-671147","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-671147","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0111 07:48:00.406935  165259 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-671147" does not appear in /home/jenkins/minikube-integration/22402-4689/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-671147 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-671147 --output=json --layout=cluster: exit status 7 (289.40099ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-671147","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-671147","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0111 07:48:00.697422  165373 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-671147" does not appear in /home/jenkins/minikube-integration/22402-4689/kubeconfig
	E0111 07:48:00.707648  165373 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/insufficient-storage-671147/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:176: Cleaning up "insufficient-storage-671147" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-671147
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-671147: (1.897515s)
--- PASS: TestInsufficientStorage (11.72s)
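
The --output=json --layout=cluster payload printed above is plain JSON, so the 507/InsufficientStorage condition can be detected programmatically. A Go sketch follows; the field names are copied from the output of this run and may differ in other minikube versions:

    // clusterstatus.go: decode `minikube status --output=json --layout=cluster`
    // and report the InsufficientStorage (507) state seen above.
    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    type component struct {
        Name       string `json:"Name"`
        StatusCode int    `json:"StatusCode"`
        StatusName string `json:"StatusName"`
    }

    type clusterStatus struct {
        Name         string               `json:"Name"`
        StatusCode   int                  `json:"StatusCode"`
        StatusName   string               `json:"StatusName"`
        StatusDetail string               `json:"StatusDetail"`
        Components   map[string]component `json:"Components"`
        Nodes        []struct {
            Name       string               `json:"Name"`
            StatusCode int                  `json:"StatusCode"`
            StatusName string               `json:"StatusName"`
            Components map[string]component `json:"Components"`
        } `json:"Nodes"`
    }

    func main() {
        // The command exits 7 in this state, so the error is expected here;
        // only stdout (the JSON document) is decoded.
        out, _ := exec.Command("out/minikube-linux-amd64", "status",
            "-p", "insufficient-storage-671147", "--output=json", "--layout=cluster").Output()

        var st clusterStatus
        if err := json.Unmarshal(out, &st); err != nil {
            fmt.Println("decode failed:", err)
            return
        }
        if st.StatusCode == 507 {
            fmt.Printf("%s: %s (%s)\n", st.Name, st.StatusName, st.StatusDetail)
        }
        for _, n := range st.Nodes {
            fmt.Printf("node %s: %s\n", n.Name, n.StatusName)
        }
    }
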

                                                
                                    
TestRunningBinaryUpgrade (71.4s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.35.0.3096157543 start -p running-upgrade-010890 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.35.0.3096157543 start -p running-upgrade-010890 --memory=3072 --vm-driver=docker  --container-runtime=crio: (45.994552298s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-010890 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-010890 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (22.35188802s)
helpers_test.go:176: Cleaning up "running-upgrade-010890" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-010890
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-010890: (2.488919861s)
--- PASS: TestRunningBinaryUpgrade (71.40s)

                                                
                                    
TestKubernetesUpgrade (332.04s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-416144 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-416144 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (25.916086269s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-416144 --alsologtostderr
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-416144 --alsologtostderr: (2.459797037s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-416144 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-416144 status --format={{.Host}}: exit status 7 (77.113818ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-416144 --memory=3072 --kubernetes-version=v1.35.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-416144 --memory=3072 --kubernetes-version=v1.35.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m53.259190889s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-416144 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-416144 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-416144 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 106 (110.417298ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-416144] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22402
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22402-4689/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22402-4689/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.35.0 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-416144
	    minikube start -p kubernetes-upgrade-416144 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-4161442 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.35.0, by running:
	    
	    minikube start -p kubernetes-upgrade-416144 --kubernetes-version=v1.35.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-416144 --memory=3072 --kubernetes-version=v1.35.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-416144 --memory=3072 --kubernetes-version=v1.35.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (7.301516583s)
helpers_test.go:176: Cleaning up "kubernetes-upgrade-416144" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-416144
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-416144: (2.844856653s)
--- PASS: TestKubernetesUpgrade (332.04s)
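
The K8S_DOWNGRADE_UNSUPPORTED message above lists three recovery options. A short Go sketch of option 1 (delete the profile, then recreate it at the older version), using the profile name and versions from this run:

    // downgrade.go: recreate the cluster at v1.28.0 after a refused downgrade,
    // following option 1 from the suggestion printed in the log above.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func mk(args ...string) error {
        cmd := exec.Command("out/minikube-linux-amd64", args...)
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        return cmd.Run()
    }

    func main() {
        profile := "kubernetes-upgrade-416144"
        if err := mk("delete", "-p", profile); err != nil {
            fmt.Println("delete failed:", err)
            return
        }
        if err := mk("start", "-p", profile, "--kubernetes-version=v1.28.0",
            "--driver=docker", "--container-runtime=crio"); err != nil {
            fmt.Println("start failed:", err)
        }
    }
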

                                                
                                    
TestMissingContainerUpgrade (68.42s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.35.0.959925400 start -p missing-upgrade-543405 --memory=3072 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.35.0.959925400 start -p missing-upgrade-543405 --memory=3072 --driver=docker  --container-runtime=crio: (20.792658866s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-543405
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-543405: (12.368136849s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-543405
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-543405 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-543405 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (32.361634028s)
helpers_test.go:176: Cleaning up "missing-upgrade-543405" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-543405
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-543405: (2.361038343s)
--- PASS: TestMissingContainerUpgrade (68.42s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.55s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.55s)

                                                
                                    
TestPause/serial/Start (58.51s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-979859 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-979859 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (58.511659919s)
--- PASS: TestPause/serial/Start (58.51s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (305.03s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.35.0.3510069632 start -p stopped-upgrade-992502 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.35.0.3510069632 start -p stopped-upgrade-992502 --memory=3072 --vm-driver=docker  --container-runtime=crio: (44.910192804s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.35.0.3510069632 -p stopped-upgrade-992502 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.35.0.3510069632 -p stopped-upgrade-992502 stop: (2.257515823s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-992502 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E0111 07:48:53.276003    8238 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/functional-148131/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-992502 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m17.865247646s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (305.03s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (6.19s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-979859 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-979859 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (6.176330638s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (6.19s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-956360 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-956360 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (96.307323ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-956360] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22402
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22402-4689/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22402-4689/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (20.4s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-956360 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-956360 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (20.050021172s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-956360 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (20.40s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (24.06s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-956360 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-956360 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (21.728326099s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-956360 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-956360 status -o json: exit status 2 (325.869721ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-956360","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-956360
no_kubernetes_test.go:149: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-956360: (2.000815858s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (24.06s)
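
The status -o json document shown above (Host running, Kubelet/APIServer stopped) is how the no-Kubernetes state is detected. A small Go sketch decoding it; the field names come from the payload printed in this run:

    // nok8sstatus.go: decode `minikube status -o json` for the NoKubernetes profile
    // and confirm the host is up while kubelet/apiserver are stopped.
    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    type profileStatus struct {
        Name       string `json:"Name"`
        Host       string `json:"Host"`
        Kubelet    string `json:"Kubelet"`
        APIServer  string `json:"APIServer"`
        Kubeconfig string `json:"Kubeconfig"`
        Worker     bool   `json:"Worker"`
    }

    func main() {
        // Exit status 2 is expected while kubelet is stopped, so only stdout is used.
        out, _ := exec.Command("out/minikube-linux-amd64",
            "-p", "NoKubernetes-956360", "status", "-o", "json").Output()

        var st profileStatus
        if err := json.Unmarshal(out, &st); err != nil {
            fmt.Println("decode failed:", err)
            return
        }
        if st.Host == "Running" && st.Kubelet == "Stopped" && st.APIServer == "Stopped" {
            fmt.Printf("%s: host up, Kubernetes not running\n", st.Name)
        }
    }
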

                                                
                                    
TestNoKubernetes/serial/Start (6.58s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-956360 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E0111 07:50:01.666547    8238 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/addons-171536/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-956360 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (6.584272295s)
--- PASS: TestNoKubernetes/serial/Start (6.58s)

                                                
                                    
TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/22402-4689/.minikube/cache/linux/amd64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.31s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-956360 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-956360 "sudo systemctl is-active --quiet service kubelet": exit status 1 (311.492477ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.31s)

                                                
                                    
TestNoKubernetes/serial/ProfileList (36.96s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-amd64 profile list
E0111 07:50:16.325497    8238 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/functional-148131/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:194: (dbg) Done: out/minikube-linux-amd64 profile list: (17.096604305s)
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:204: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (19.86690153s)
--- PASS: TestNoKubernetes/serial/ProfileList (36.96s)

                                                
                                    
TestNetworkPlugins/group/false (3.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-181803 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-181803 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (160.74305ms)

                                                
                                                
-- stdout --
	* [false-181803] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22402
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22402-4689/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22402-4689/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0111 07:50:25.760919  203914 out.go:360] Setting OutFile to fd 1 ...
	I0111 07:50:25.761029  203914 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 07:50:25.761038  203914 out.go:374] Setting ErrFile to fd 2...
	I0111 07:50:25.761041  203914 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 07:50:25.761252  203914 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22402-4689/.minikube/bin
	I0111 07:50:25.761698  203914 out.go:368] Setting JSON to false
	I0111 07:50:25.762697  203914 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":1974,"bootTime":1768115852,"procs":286,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0111 07:50:25.762749  203914 start.go:143] virtualization: kvm guest
	I0111 07:50:25.764663  203914 out.go:179] * [false-181803] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0111 07:50:25.766006  203914 out.go:179]   - MINIKUBE_LOCATION=22402
	I0111 07:50:25.766009  203914 notify.go:221] Checking for updates...
	I0111 07:50:25.768383  203914 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0111 07:50:25.769578  203914 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22402-4689/kubeconfig
	I0111 07:50:25.770761  203914 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22402-4689/.minikube
	I0111 07:50:25.771828  203914 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0111 07:50:25.772796  203914 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0111 07:50:25.774287  203914 config.go:182] Loaded profile config "NoKubernetes-956360": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I0111 07:50:25.774430  203914 config.go:182] Loaded profile config "kubernetes-upgrade-416144": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 07:50:25.774539  203914 config.go:182] Loaded profile config "stopped-upgrade-992502": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0111 07:50:25.774629  203914 driver.go:422] Setting default libvirt URI to qemu:///system
	I0111 07:50:25.798704  203914 docker.go:124] docker version: linux-29.1.4:Docker Engine - Community
	I0111 07:50:25.798896  203914 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0111 07:50:25.858133  203914 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:74 SystemTime:2026-01-11 07:50:25.847543817 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0111 07:50:25.858244  203914 docker.go:319] overlay module found
	I0111 07:50:25.860836  203914 out.go:179] * Using the docker driver based on user configuration
	I0111 07:50:25.862090  203914 start.go:309] selected driver: docker
	I0111 07:50:25.862105  203914 start.go:928] validating driver "docker" against <nil>
	I0111 07:50:25.862120  203914 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0111 07:50:25.863679  203914 out.go:203] 
	W0111 07:50:25.864741  203914 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0111 07:50:25.865788  203914 out.go:203] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-181803 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-181803

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-181803

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-181803

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-181803

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-181803

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-181803

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-181803

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-181803

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-181803

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-181803

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-181803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-181803"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-181803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-181803"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-181803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-181803"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-181803

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-181803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-181803"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-181803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-181803"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-181803" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-181803" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-181803" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-181803" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-181803" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-181803" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-181803" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-181803" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-181803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-181803"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-181803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-181803"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-181803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-181803"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-181803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-181803"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-181803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-181803"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-181803" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-181803" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-181803" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-181803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-181803"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-181803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-181803"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-181803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-181803"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-181803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-181803"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-181803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-181803"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22402-4689/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 11 Jan 2026 07:49:58 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: kubernetes-upgrade-416144
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22402-4689/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 11 Jan 2026 07:48:57 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: stopped-upgrade-992502
contexts:
- context:
    cluster: kubernetes-upgrade-416144
    user: kubernetes-upgrade-416144
  name: kubernetes-upgrade-416144
- context:
    cluster: stopped-upgrade-992502
    user: stopped-upgrade-992502
  name: stopped-upgrade-992502
current-context: ""
kind: Config
users:
- name: kubernetes-upgrade-416144
  user:
    client-certificate: /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/kubernetes-upgrade-416144/client.crt
    client-key: /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/kubernetes-upgrade-416144/client.key
- name: stopped-upgrade-992502
  user:
    client-certificate: /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/stopped-upgrade-992502/client.crt
    client-key: /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/stopped-upgrade-992502/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-181803

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-181803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-181803"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-181803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-181803"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-181803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-181803"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-181803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-181803"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-181803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-181803"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-181803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-181803"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-181803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-181803"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-181803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-181803"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-181803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-181803"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-181803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-181803"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-181803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-181803"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-181803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-181803"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-181803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-181803"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-181803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-181803"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-181803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-181803"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-181803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-181803"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-181803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-181803"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-181803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-181803"

                                                
                                                
----------------------- debugLogs end: false-181803 [took: 3.028488921s] --------------------------------
helpers_test.go:176: Cleaning up "false-181803" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p false-181803
--- PASS: TestNetworkPlugins/group/false (3.35s)
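Note: the exit status 14 above is the usage guard this test expects, not a regression. minikube rejects --cni=false when the container runtime is crio because crio needs a CNI plugin to create pod sandboxes, and since the start never completes, every "context was not found" / "Profile ... not found" line in the debug dump simply reflects that the false-181803 profile was never created. A minimal sketch of the same start with an explicit CNI instead (flag values taken from other runs in this report; the profile name is illustrative):

	# crio requires a CNI, so pass one explicitly instead of --cni=false
	out/minikube-linux-amd64 start -p crio-with-cni --memory=3072 \
	  --cni=bridge --driver=docker --container-runtime=crio
	# --cni also accepts a path to a CNI manifest, as the custom-flannel run below does:
	#   --cni=testdata/kube-flannel.yaml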

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.29s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-956360
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-956360: (1.28920422s)
--- PASS: TestNoKubernetes/serial/Stop (1.29s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (6.6s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-956360 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-956360 --driver=docker  --container-runtime=crio: (6.604368801s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.60s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.28s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-956360 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-956360 "sudo systemctl is-active --quiet service kubelet": exit status 1 (283.353485ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.28s)
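Note: this check relies on systemctl's exit code rather than its output. "systemctl is-active --quiet" prints nothing and exits 0 when the unit is active, non-zero otherwise (3, as seen in the stderr above, is the code for an inactive unit), so the non-zero ssh exit status is what proves kubelet is not running. The same check run by hand (profile name taken from this report):

	out/minikube-linux-amd64 ssh -p NoKubernetes-956360 "sudo systemctl is-active --quiet service kubelet"
	echo $?   # 0 would mean kubelet is active; 3 (inactive) is what this passing test expects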

                                                
                                    
x
+
TestPreload/Start-NoPreload-PullImage (49.62s)

                                                
                                                
=== RUN   TestPreload/Start-NoPreload-PullImage
preload_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-391258 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio
preload_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-391258 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio: (42.906457797s)
preload_test.go:56: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-391258 image pull ghcr.io/medyagh/image-mirrors/busybox:latest
preload_test.go:62: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-391258
preload_test.go:62: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-391258: (6.165307255s)
--- PASS: TestPreload/Start-NoPreload-PullImage (49.62s)

                                                
                                    
x
+
TestPreload/Restart-With-Preload-Check-User-Image (50.34s)

                                                
                                                
=== RUN   TestPreload/Restart-With-Preload-Check-User-Image
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-391258 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:71: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-391258 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (50.105991876s)
preload_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-391258 image list
--- PASS: TestPreload/Restart-With-Preload-Check-User-Image (50.34s)
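Note: taken together, the two preload subtests exercise a simple round trip: start without the preloaded image tarball, pull an extra image, stop, restart with preload enabled, then confirm the manually pulled image is still present. A condensed sketch of that flow using the same commands as the log above (profile and image names copied from the run; some verbosity flags omitted):

	out/minikube-linux-amd64 start -p test-preload-391258 --memory=3072 --wait=true --preload=false \
	  --driver=docker --container-runtime=crio
	out/minikube-linux-amd64 -p test-preload-391258 image pull ghcr.io/medyagh/image-mirrors/busybox:latest
	out/minikube-linux-amd64 stop -p test-preload-391258
	out/minikube-linux-amd64 start -p test-preload-391258 --preload=true --wait=true \
	  --driver=docker --container-runtime=crio
	out/minikube-linux-amd64 -p test-preload-391258 image list   # should still include the busybox image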

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (0.98s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-992502
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.98s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (42.02s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-181803 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-181803 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (42.019222787s)
--- PASS: TestNetworkPlugins/group/auto/Start (42.02s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (39.87s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-181803 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
E0111 07:53:53.275292    8238 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/functional-148131/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-181803 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (39.874780278s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (39.87s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-181803 "pgrep -a kubelet"
I0111 07:53:53.858387    8238 config.go:182] Loaded profile config "auto-181803": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (8.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-181803 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-nxpxz" [0d44a306-1cb0-47dd-8cdd-3fa33dea9149] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-nxpxz" [0d44a306-1cb0-47dd-8cdd-3fa33dea9149] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 8.003726641s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (8.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:353: "kindnet-gf5sm" [e9837ecb-c598-47c6-88e5-084e1b4be87c] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003134923s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
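Note: the ControllerPod subtests poll for the CNI's daemon pod by label until it reports Running and Ready. A sketch of an equivalent manual check (label, namespace and context taken from the log above; the kubectl wait invocation and its timeout are illustrative, not part of the test):

	kubectl --context kindnet-181803 -n kube-system get pods -l app=kindnet
	kubectl --context kindnet-181803 -n kube-system wait --for=condition=Ready pod -l app=kindnet --timeout=120s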

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-181803 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.10s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-181803 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.09s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.08s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-181803 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.08s)
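Note: the DNS, Localhost and HairPin subtests all exec into the netcat deployment created above: nslookup verifies in-cluster service DNS, nc against localhost:8080 verifies the pod answers on its own port, and nc against the netcat service name verifies hairpin NAT (a pod reaching itself through its own service). The same three probes run by hand, with context, deployment and port copied from the log:

	kubectl --context auto-181803 exec deployment/netcat -- nslookup kubernetes.default
	kubectl --context auto-181803 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
	kubectl --context auto-181803 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"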

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-181803 "pgrep -a kubelet"
I0111 07:54:06.338920    8238 config.go:182] Loaded profile config "kindnet-181803": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (8.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-181803 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-tv2bp" [4473ffdc-471f-4410-9db5-0c7e22558bc1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-tv2bp" [4473ffdc-471f-4410-9db5-0c7e22558bc1] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 8.005009036s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (8.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-181803 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-181803 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.09s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-181803 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.10s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (50.81s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-181803 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-181803 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (50.806834192s)
--- PASS: TestNetworkPlugins/group/calico/Start (50.81s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (49.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-181803 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-181803 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (49.359829667s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (49.36s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (65.02s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-181803 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-181803 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m5.017888708s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (65.02s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (42.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-181803 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
E0111 07:55:01.666648    8238 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/addons-171536/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-181803 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (42.23448195s)
--- PASS: TestNetworkPlugins/group/flannel/Start (42.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-181803 "pgrep -a kubelet"
I0111 07:55:12.038935    8238 config.go:182] Loaded profile config "custom-flannel-181803": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.35s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (8.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-181803 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-wskv2" [ec586df7-6f68-45d9-9a5b-4fba2cf48820] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-wskv2" [ec586df7-6f68-45d9-9a5b-4fba2cf48820] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 8.0038716s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (8.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:353: "calico-node-f2msz" [3d7daa60-814e-483c-bc0f-9ec81bf363e2] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:353: "calico-node-f2msz" [3d7daa60-814e-483c-bc0f-9ec81bf363e2] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.003727846s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-181803 "pgrep -a kubelet"
I0111 07:55:18.818482    8238 config.go:182] Loaded profile config "calico-181803": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (8.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-181803 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-blz6z" [5d944263-5724-4d08-be28-170843e93496] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-blz6z" [5d944263-5724-4d08-be28-170843e93496] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 8.007019175s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (8.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:353: "kube-flannel-ds-r9lf6" [aba6e971-2938-4242-8de9-aa7c3a24abb1] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004575532s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-181803 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-181803 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.10s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-181803 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.10s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-181803 "pgrep -a kubelet"
I0111 07:55:25.784070    8238 config.go:182] Loaded profile config "flannel-181803": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (8.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-181803 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-psz2j" [f69a42e8-1cae-4e29-9bde-0e13457be955] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-psz2j" [f69a42e8-1cae-4e29-9bde-0e13457be955] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 8.003399978s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (8.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-181803 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-181803 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-181803 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-181803 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-181803 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.10s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-181803 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-181803 "pgrep -a kubelet"
I0111 07:55:40.992630    8238 config.go:182] Loaded profile config "enable-default-cni-181803": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.37s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (8.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-181803 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-8pr9s" [8118650f-9074-4022-b8d5-07c5ddab8510] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-8pr9s" [8118650f-9074-4022-b8d5-07c5ddab8510] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 8.008133256s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (8.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (44.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-181803 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-181803 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (44.380871989s)
--- PASS: TestNetworkPlugins/group/bridge/Start (44.38s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-181803 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-181803 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.10s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.1s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-181803 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.10s)

TestStartStop/group/old-k8s-version/serial/FirstStart (50.46s)
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-086314 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-086314 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (50.457026262s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (50.46s)

TestStartStop/group/no-preload/serial/FirstStart (50.95s)
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-875594 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-875594 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0: (50.946172671s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (50.95s)

TestStartStop/group/embed-certs/serial/FirstStart (39.44s)
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-285966 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-285966 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0: (39.437055352s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (39.44s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.32s)
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-181803 "pgrep -a kubelet"
I0111 07:56:26.156199    8238 config.go:182] Loaded profile config "bridge-181803": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.32s)

TestNetworkPlugins/group/bridge/NetCatPod (9.37s)
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-181803 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-xss5x" [6b00dbd9-04a7-4344-88a7-f2af0f38d18a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-xss5x" [6b00dbd9-04a7-4344-88a7-f2af0f38d18a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.00343568s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.37s)

TestNetworkPlugins/group/bridge/DNS (0.11s)
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-181803 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.11s)

TestNetworkPlugins/group/bridge/Localhost (0.09s)
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-181803 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.09s)

TestNetworkPlugins/group/bridge/HairPin (0.09s)
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-181803 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.09s)

TestStartStop/group/old-k8s-version/serial/DeployApp (8.26s)
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-086314 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [e83aa813-f5e9-4248-9355-54c56b9cd30d] Pending
helpers_test.go:353: "busybox" [e83aa813-f5e9-4248-9355-54c56b9cd30d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [e83aa813-f5e9-4248-9355-54c56b9cd30d] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.003691501s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-086314 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.26s)

TestStartStop/group/no-preload/serial/DeployApp (7.26s)
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-875594 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [41b852b9-8aa8-498c-ae93-7ae80b982c8d] Pending
helpers_test.go:353: "busybox" [41b852b9-8aa8-498c-ae93-7ae80b982c8d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [41b852b9-8aa8-498c-ae93-7ae80b982c8d] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 7.004347844s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-875594 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (7.26s)

TestStartStop/group/old-k8s-version/serial/Stop (16.23s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-086314 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-086314 --alsologtostderr -v=3: (16.226282848s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (16.23s)

TestStartStop/group/embed-certs/serial/DeployApp (9.21s)
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-285966 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [b919cf59-33aa-47fd-a15a-51037a983c6e] Pending
helpers_test.go:353: "busybox" [b919cf59-33aa-47fd-a15a-51037a983c6e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [b919cf59-33aa-47fd-a15a-51037a983c6e] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.003537953s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-285966 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.21s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (41.31s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-585688 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-585688 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0: (41.305775465s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (41.31s)

TestStartStop/group/no-preload/serial/Stop (16.38s)
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-875594 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-875594 --alsologtostderr -v=3: (16.382189652s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (16.38s)

TestStartStop/group/embed-certs/serial/Stop (16.25s)
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-285966 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-285966 --alsologtostderr -v=3: (16.250463765s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (16.25s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-086314 -n old-k8s-version-086314
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-086314 -n old-k8s-version-086314: exit status 7 (82.614709ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-086314 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/old-k8s-version/serial/SecondStart (48.1s)
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-086314 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-086314 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (47.756368852s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-086314 -n old-k8s-version-086314
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (48.10s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.32s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-875594 -n no-preload-875594
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-875594 -n no-preload-875594: exit status 7 (111.685148ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-875594 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.32s)

TestStartStop/group/no-preload/serial/SecondStart (47.6s)
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-875594 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-875594 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0: (47.260339667s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-875594 -n no-preload-875594
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (47.60s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-285966 -n embed-certs-285966
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-285966 -n embed-certs-285966: exit status 7 (83.798442ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-285966 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/embed-certs/serial/SecondStart (43.28s)
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-285966 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-285966 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0: (42.955088428s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-285966 -n embed-certs-285966
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (43.28s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.29s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-585688 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [36c5c6ac-10cc-4c99-b758-401a24ce4d9a] Pending
helpers_test.go:353: "busybox" [36c5c6ac-10cc-4c99-b758-401a24ce4d9a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [36c5c6ac-10cc-4c99-b758-401a24ce4d9a] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.003214144s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-585688 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.29s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (16.24s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-585688 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-585688 --alsologtostderr -v=3: (16.235280802s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (16.24s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-5jfjr" [eda0c88c-5a79-4493-85c4-7bc8e8cbd7e0] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004042772s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-5jfjr" [eda0c88c-5a79-4493-85c4-7bc8e8cbd7e0] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003448059s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-086314 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-rwbwb" [66f04145-a1e4-4ff2-a3e8-2d4dd5068888] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003425129s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-crv8j" [1cb69418-06d6-445e-aae6-af5483d3cf76] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004905611s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.2s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-585688 -n default-k8s-diff-port-585688
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-585688 -n default-k8s-diff-port-585688: exit status 7 (79.176752ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-585688 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (47.34s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-585688 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-585688 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0: (47.007554725s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-585688 -n default-k8s-diff-port-585688
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (47.34s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-086314 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.1s)
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-rwbwb" [66f04145-a1e4-4ff2-a3e8-2d4dd5068888] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004166886s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-875594 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-crv8j" [1cb69418-06d6-445e-aae6-af5483d3cf76] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004762431s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-285966 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.27s)
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-875594 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.27s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.29s)
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-285966 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.29s)

TestStartStop/group/newest-cni/serial/FirstStart (27.31s)
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-613901 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-613901 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0: (27.313841196s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (27.31s)

TestPreload/PreloadSrc/gcs (4.14s)
=== RUN   TestPreload/PreloadSrc/gcs
preload_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-dl-gcs-476138 --download-only --kubernetes-version v1.34.0-rc.1 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=crio
preload_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-dl-gcs-476138 --download-only --kubernetes-version v1.34.0-rc.1 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=crio: (3.935455621s)
helpers_test.go:176: Cleaning up "test-preload-dl-gcs-476138" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-dl-gcs-476138
--- PASS: TestPreload/PreloadSrc/gcs (4.14s)

TestPreload/PreloadSrc/github (4.48s)
=== RUN   TestPreload/PreloadSrc/github
preload_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-dl-github-044639 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=github --alsologtostderr --v=1 --driver=docker  --container-runtime=crio
preload_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-dl-github-044639 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=github --alsologtostderr --v=1 --driver=docker  --container-runtime=crio: (3.878216292s)
helpers_test.go:176: Cleaning up "test-preload-dl-github-044639" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-dl-github-044639
--- PASS: TestPreload/PreloadSrc/github (4.48s)

TestPreload/PreloadSrc/gcs-cached (0.5s)
=== RUN   TestPreload/PreloadSrc/gcs-cached
preload_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-dl-gcs-cached-787958 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=crio
helpers_test.go:176: Cleaning up "test-preload-dl-gcs-cached-787958" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-dl-gcs-cached-787958
--- PASS: TestPreload/PreloadSrc/gcs-cached (0.50s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/Stop (17.95s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-613901 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-613901 --alsologtostderr -v=3: (17.947108653s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (17.95s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-z4znd" [7185172b-780d-4902-a2b4-22f2622acddd] Running
E0111 07:58:53.275704    8238 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/functional-148131/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 07:58:54.046902    8238 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/auto-181803/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 07:58:54.052238    8238 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/auto-181803/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 07:58:54.062526    8238 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/auto-181803/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 07:58:54.082843    8238 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/auto-181803/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 07:58:54.123095    8238 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/auto-181803/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 07:58:54.203394    8238 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/auto-181803/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 07:58:54.363907    8238 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/auto-181803/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 07:58:54.684486    8238 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/auto-181803/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 07:58:55.325362    8238 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/auto-181803/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 07:58:56.605631    8238 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/auto-181803/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003433155s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-z4znd" [7185172b-780d-4902-a2b4-22f2622acddd] Running
E0111 07:58:59.166282    8238 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/auto-181803/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 07:59:00.046042    8238 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/kindnet-181803/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 07:59:00.051352    8238 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/kindnet-181803/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 07:59:00.061601    8238 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/kindnet-181803/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 07:59:00.081871    8238 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/kindnet-181803/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 07:59:00.122186    8238 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/kindnet-181803/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 07:59:00.202448    8238 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/kindnet-181803/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 07:59:00.363033    8238 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/kindnet-181803/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 07:59:00.683606    8238 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/kindnet-181803/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 07:59:01.324314    8238 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/kindnet-181803/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00394059s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-585688 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-585688 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.2s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-613901 -n newest-cni-613901
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-613901 -n newest-cni-613901: exit status 7 (77.756333ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-613901 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E0111 07:59:04.286774    8238 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/auto-181803/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/newest-cni/serial/SecondStart (10.33s)
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-613901 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-613901 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0: (9.992959214s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-613901 -n newest-cni-613901
E0111 07:59:14.527382    8238 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/auto-181803/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (10.33s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-613901 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

Test skip (27/332)

TestDownloadOnly/v1.28.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

TestDownloadOnly/v1.28.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

TestDownloadOnly/v1.28.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

TestDownloadOnly/v1.35.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.35.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.35.0/cached-images (0.00s)

TestDownloadOnly/v1.35.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.35.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.35.0/binaries (0.00s)

TestDownloadOnly/v1.35.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.35.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.35.0/kubectl (0.00s)

TestAddons/serial/GCPAuth/RealCredentials (0s)
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:765: skipping GCPAuth addon test until 'Permission "artifactregistry.repositories.downloadArtifacts" denied on resource "projects/k8s-minikube/locations/us/repositories/test-artifacts" (or it may not exist)' issue is resolved
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:485: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestISOImage (0s)

                                                
                                                
=== RUN   TestISOImage
iso_test.go:36: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the crio container runtime requires CNI
panic.go:615: 
----------------------- debugLogs start: kubenet-181803 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-181803

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-181803

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-181803

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-181803

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-181803

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-181803

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-181803

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-181803

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-181803

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-181803

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-181803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-181803"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-181803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-181803"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-181803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-181803"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-181803

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-181803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-181803"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-181803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-181803"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-181803" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-181803" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-181803" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-181803" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-181803" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-181803" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-181803" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-181803" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-181803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-181803"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-181803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-181803"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-181803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-181803"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-181803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-181803"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-181803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-181803"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-181803" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-181803" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-181803" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-181803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-181803"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-181803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-181803"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-181803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-181803"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-181803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-181803"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-181803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-181803"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22402-4689/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 11 Jan 2026 07:49:58 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: kubernetes-upgrade-416144
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22402-4689/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 11 Jan 2026 07:48:57 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: stopped-upgrade-992502
contexts:
- context:
    cluster: kubernetes-upgrade-416144
    user: kubernetes-upgrade-416144
  name: kubernetes-upgrade-416144
- context:
    cluster: stopped-upgrade-992502
    user: stopped-upgrade-992502
  name: stopped-upgrade-992502
current-context: ""
kind: Config
users:
- name: kubernetes-upgrade-416144
  user:
    client-certificate: /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/kubernetes-upgrade-416144/client.crt
    client-key: /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/kubernetes-upgrade-416144/client.key
- name: stopped-upgrade-992502
  user:
    client-certificate: /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/stopped-upgrade-992502/client.crt
    client-key: /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/stopped-upgrade-992502/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-181803

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-181803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-181803"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-181803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-181803"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-181803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-181803"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-181803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-181803"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-181803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-181803"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-181803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-181803"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-181803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-181803"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-181803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-181803"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-181803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-181803"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-181803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-181803"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-181803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-181803"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-181803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-181803"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-181803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-181803"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-181803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-181803"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-181803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-181803"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-181803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-181803"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-181803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-181803"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-181803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-181803"

                                                
                                                
----------------------- debugLogs end: kubenet-181803 [took: 3.117733513s] --------------------------------
helpers_test.go:176: Cleaning up "kubenet-181803" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-181803
--- SKIP: TestNetworkPlugins/group/kubenet (3.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (3.57s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:615: 
----------------------- debugLogs start: cilium-181803 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-181803

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-181803

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-181803

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-181803

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-181803

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-181803

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-181803

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-181803

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-181803

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-181803

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-181803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-181803"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-181803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-181803"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-181803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-181803"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-181803

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-181803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-181803"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-181803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-181803"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-181803" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-181803" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-181803" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-181803" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-181803" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-181803" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-181803" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-181803" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-181803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-181803"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-181803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-181803"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-181803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-181803"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-181803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-181803"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-181803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-181803"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-181803

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-181803

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-181803" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-181803" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-181803

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-181803

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-181803" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-181803" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-181803" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-181803" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-181803" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-181803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-181803"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-181803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-181803"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-181803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-181803"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-181803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-181803"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-181803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-181803"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22402-4689/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 11 Jan 2026 07:49:58 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: kubernetes-upgrade-416144
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22402-4689/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 11 Jan 2026 07:48:57 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: stopped-upgrade-992502
contexts:
- context:
    cluster: kubernetes-upgrade-416144
    user: kubernetes-upgrade-416144
  name: kubernetes-upgrade-416144
- context:
    cluster: stopped-upgrade-992502
    user: stopped-upgrade-992502
  name: stopped-upgrade-992502
current-context: ""
kind: Config
users:
- name: kubernetes-upgrade-416144
  user:
    client-certificate: /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/kubernetes-upgrade-416144/client.crt
    client-key: /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/kubernetes-upgrade-416144/client.key
- name: stopped-upgrade-992502
  user:
    client-certificate: /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/stopped-upgrade-992502/client.crt
    client-key: /home/jenkins/minikube-integration/22402-4689/.minikube/profiles/stopped-upgrade-992502/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-181803

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-181803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-181803"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-181803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-181803"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-181803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-181803"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-181803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-181803"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-181803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-181803"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-181803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-181803"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-181803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-181803"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-181803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-181803"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-181803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-181803"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-181803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-181803"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-181803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-181803"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-181803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-181803"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-181803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-181803"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-181803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-181803"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-181803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-181803"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-181803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-181803"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-181803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-181803"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-181803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-181803"

                                                
                                                
----------------------- debugLogs end: cilium-181803 [took: 3.392244437s] --------------------------------
helpers_test.go:176: Cleaning up "cilium-181803" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-181803
--- SKIP: TestNetworkPlugins/group/cilium (3.57s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.17s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:176: Cleaning up "disable-driver-mounts-768607" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-768607
--- SKIP: TestStartStop/group/disable-driver-mounts (0.17s)

                                                
                                    